You searched for +publisher:"Georgia Tech" +contributor:("Romberg, Justin")
Showing records 1 – 30 of 70 total matches.

Georgia Tech
1.
Tanveer, Maham.
Classification of anomalous machine sounds using i-vectors.
Degree: MS, Electrical and Computer Engineering, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/62812
The objective of the proposed work is to analyze the use of i-vectors for Anomalous Detection of Sounds (ADS) in machines. I-vectors, to the best of our knowledge, have not been studied for machine sounds. We will be using the ToyADMOS database to test both supervised and unsupervised ADS techniques using i-vectors generated from the data.
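A minimal sketch of the unsupervised route the abstract describes, assuming i-vectors have already been extracted into arrays (the shapes, hyperparameters, and random data below are placeholders, not the thesis's pipeline):

```python
# Hedged sketch: one-class SVM anomaly detection on pre-extracted i-vectors.
# The random arrays stand in for real i-vectors from normal machine sounds.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train_ivectors = rng.normal(size=(500, 100))   # i-vectors from normal clips only
test_ivectors = rng.normal(size=(50, 100))     # unseen clips to score

scaler = StandardScaler().fit(train_ivectors)
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(scaler.transform(train_ivectors))

# decision_function > 0 -> "normal"; < 0 -> flagged as anomalous
scores = model.decision_function(scaler.transform(test_ivectors))
print(f"{(scores < 0).sum()} of {len(scores)} clips flagged as anomalous")
```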
Advisors/Committee Members: Anderson, David V (advisor), Davenport, Mark A (advisor), Romberg, Justin (advisor).
Subjects/Keywords: I-vector; Anomalous detection of sounds; Machine sounds classification; SVM; Naive Bayes; One-class SVM
APA (6th Edition):
Tanveer, M. (2020). Classification of anomalous machine sounds using i-vectors. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62812
Chicago Manual of Style (16th Edition):
Tanveer, Maham. “Classification of anomalous machine sounds using i-vectors.” 2020. Masters Thesis, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/62812.
MLA Handbook (7th Edition):
Tanveer, Maham. “Classification of anomalous machine sounds using i-vectors.” 2020. Web. 07 Mar 2021.
Vancouver:
Tanveer M. Classification of anomalous machine sounds using i-vectors. [Internet] [Masters thesis]. Georgia Tech; 2020. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/62812.
Council of Science Editors:
Tanveer M. Classification of anomalous machine sounds using i-vectors. [Masters Thesis]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62812

Georgia Tech
2.
Vinay, Ashvala.
The sound within: Learning audio features from electroencephalogram recordings of music listening.
Degree: MS, Music, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/62866
We look at the intersection of music, machine learning and neuroscience. Specifically, we are interested in understanding how we can predict audio onset events using the electroencephalogram response of subjects listening to the same music segment. We present models and approaches to this problem derived from deep learning. We worked with a highly imbalanced dataset and present two methods to address the imbalance: tolerance windows and aggregations. The models presented are a feed-forward network, a convolutional neural network (CNN), a recurrent neural network (RNN) and an RNN with a custom unrolling method. Our results show that at a tolerance window of 40 ms, a feed-forward network performed well. We also found that an aggregation of 200 ms gave promising results, with aggregations being a simple way to reduce model complexity.
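As a concrete reading of the 40 ms tolerance window mentioned above (a generic sketch, not the thesis's evaluation code): a predicted onset counts as a hit if it falls within the window of an unmatched ground-truth onset.

```python
# Hedged sketch of tolerance-window scoring for onset prediction.
# Times are in seconds; the 40 ms default follows the abstract.
def onset_hits(predicted, truth, tol=0.040):
    truth = sorted(truth)
    used = [False] * len(truth)
    hits = 0
    for p in sorted(predicted):
        for i, t in enumerate(truth):
            if not used[i] and abs(p - t) <= tol:
                used[i] = True     # each true onset can be matched once
                hits += 1
                break
    return hits

print(onset_hits([0.12, 0.51, 0.98], [0.10, 0.55, 1.50]))  # 2 hits at 40 ms
```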
Advisors/Committee Members: Leslie, Grace (advisor), Lerch, Alexander (committee member), Romberg, Justin (committee member).
Subjects/Keywords: Music technology; Machine learning; Neuroimaging; EEG; Music information retrieval
APA (6th Edition):
Vinay, A. (2020). The sound within: Learning audio features from electroencephalogram recordings of music listening. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62866
Chicago Manual of Style (16th Edition):
Vinay, Ashvala. “The sound within: Learning audio features from electroencephalogram recordings of music listening.” 2020. Masters Thesis, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/62866.
MLA Handbook (7th Edition):
Vinay, Ashvala. “The sound within: Learning audio features from electroencephalogram recordings of music listening.” 2020. Web. 07 Mar 2021.
Vancouver:
Vinay A. The sound within: Learning audio features from electroencephalogram recordings of music listening. [Internet] [Masters thesis]. Georgia Tech; 2020. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/62866.
Council of Science Editors:
Vinay A. The sound within: Learning audio features from electroencephalogram recordings of music listening. [Masters Thesis]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62866

Georgia Tech
3.
Witte, Philipp Andre.
Software and algorithms for large-scale seismic inverse problems.
Degree: PhD, Computational Science and Engineering, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/62754
Seismic imaging and parameter estimation are an important class of inverse problems with practical relevance in resource exploration, carbon control and monitoring systems for geohazards. Seismic inverse problems involve solving a large number of partial differential equations (PDEs) during numerical optimization using finite-difference modeling, making them computationally expensive. Additionally, problems of this type are typically ill-posed, non-convex or ill-conditioned, making them challenging from a mathematical standpoint as well. Similar to the field of deep learning, this calls for software that is not only optimized for performance, but also enables geophysical domain specialists to experiment with algorithms in high-level programming languages and in different computing environments, such as high-performance computing (HPC) clusters or the cloud. It also calls for the adaptation of dimensionality-reduction techniques and stochastic algorithms to address computational cost from the algorithmic side. This thesis makes three distinct contributions to address computational challenges encountered in seismic inverse problems and to facilitate algorithmic development in this field. Part one introduces a large-scale framework for seismic modeling and inversion based on the paradigm of separation of concerns, which combines a user interface based on domain-specific abstractions with a Python package for automatic code generation to solve the underlying PDEs. The modular code structure makes it possible to manage the complexity of a seismic inversion code, while matrix-free linear operators and data containers enable the implementation of algorithms in a fashion that closely resembles the underlying mathematical notation. The second contribution of this thesis is an algorithm for seismic imaging that addresses its high computational cost and large memory footprint through a combination of on-the-fly Fourier transforms, stochastic sampling techniques and sparsity-promoting optimization. The algorithm combines the best of both time- and frequency-domain inversion, as the memory footprint is independent of the number of modeled time steps, while time-to-frequency conversions avoid the need to solve Helmholtz equations, which involve inverting ill-conditioned matrices. Part three of this thesis introduces a novel approach for adapting the cloud to high-performance computing applications like seismic imaging, which does not rely on a fixed cluster of permanently running virtual machines. Instead, computational resources are automatically started and terminated by the cloud environment during runtime, and the workflow takes advantage of cloud-native technologies such as event-driven computations and containerized batch processing.
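The "matrix-free linear operators" mentioned above are the standard SciPy-style abstraction in which an operator is defined by its action rather than an explicit matrix. A generic sketch follows, with a Gaussian blur standing in for a real wave-equation modeling operator (which the thesis framework would supply):

```python
# Hedged sketch of a matrix-free linear operator used inside a least-squares
# "inversion"; the Gaussian blur is only a placeholder for real physics.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.sparse.linalg import LinearOperator, lsqr

n = 64
shape = (n, n)

def forward(x):   # action of A (no matrix is ever formed)
    return gaussian_filter(x.reshape(shape), sigma=2.0).ravel()

# the symmetric blur is (approximately) self-adjoint, so reuse it for A^T
A = LinearOperator((n * n, n * n), matvec=forward, rmatvec=forward)

x_true = np.zeros(shape)
x_true[20:40, 20:40] = 1.0
data = A.matvec(x_true.ravel())              # synthetic "observed" data
x_est = lsqr(A, data, iter_lim=50)[0]        # iterative least-squares solve
print(np.linalg.norm(x_est - x_true.ravel()) / np.linalg.norm(x_true))
```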
Advisors/Committee Members: Herrmann, Felix J. (advisor), Chow, Edmond (advisor), Vuduc, Richard (advisor), Peng, Zhigang (advisor), Romberg, Justin (advisor).
Subjects/Keywords: Seismic; Algorithm; Cloud; High-performance-computing; Geophysics
APA (6th Edition):
Witte, P. A. (2020). Software and algorithms for large-scale seismic inverse problems. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62754
Chicago Manual of Style (16th Edition):
Witte, Philipp Andre. “Software and algorithms for large-scale seismic inverse problems.” 2020. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/62754.
MLA Handbook (7th Edition):
Witte, Philipp Andre. “Software and algorithms for large-scale seismic inverse problems.” 2020. Web. 07 Mar 2021.
Vancouver:
Witte PA. Software and algorithms for large-scale seismic inverse problems. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/62754.
Council of Science Editors:
Witte PA. Software and algorithms for large-scale seismic inverse problems. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62754

Georgia Tech
4.
McCormick, Jackson C.
D region tomography: A technique for ionospheric imaging using lightning-generated sferics and inverse modeling.
Degree: PhD, Electrical and Computer Engineering, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/62300
The D region of the ionosphere (60-90 km altitude) is a plasma layer which is highly variable on timescales from fractions of a second to many hours and on spatial scales up to many hundreds of kilometers. VLF and LF (3-30 kHz, 30-300 kHz) radio waves are guided to global distances by reflections between the ground and the D region. Therefore, information about the current state of the ionosphere is encoded in received VLF/LF signals. VLF transmitters, for example, have been used in the past for ionospheric remote sensing, with ionospheric disturbances manifesting as perturbations in amplitude and/or phase. The return stroke of lightning is an impulsive VLF radiator, but unlike VLF transmitters, lightning flashes are spread broadly in space, allowing for much greater spatial coverage of the D region compared to VLF transmitters. Furthermore, sferics provide a broadband spectral advantage over the narrowband transmitters. The challenge is that individual lightning-generated waveforms, or "sferics", vary due to uncertainty in the time/location information, D region ionospheric variability, and the uniqueness of each lightning flash. In part, this thesis describes a technique to mitigate this variability to produce stable, high-SNR sferic measurements. Using a propagation model, the received sferics can be used to infer an ionospheric electron density profile that is interpreted as an average along the path from lightning stroke to receiver. We develop a new model for electron density vs. altitude which is a natural extension of the Wait and Spies 2-parameter model. We call this new model the "split" model because the D region commonly appears to split into two exponentially increasing electron density portions. The split model is described by 4 parameters: h', β, s_ℓ, and Δh, respectively indicating the height, slope, split location, and split magnitude. We introduce the D region tomography algorithm. The path-averaged electron density inferences are related to a 4-dimensional image specified by latitude, longitude, altitude, and time. For a given time window and altitude, we can produce a 2D slice where the electron density is specified everywhere, even where there is no transmitter-receiver path. Sparse and nonuniform spatial and temporal coverage of the ionosphere leads to artifacts and bias in the produced images. We address these problems through sparse optimization techniques and a smoothness constraint using the discrete cosine transform.
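For reference, the Wait and Spies two-parameter profile that the abstract's "split" model extends is the standard exponential electron-density model (h and h' in km, β in km⁻¹); the exact form of the four-parameter split model is not given in the abstract and is not reproduced here.

```latex
% Classic Wait & Spies (1964) two-parameter D-region profile:
N_e(h) \;=\; 1.43\times 10^{13}\,
  \exp\!\left(-0.15\,h'\right)\,
  \exp\!\bigl[(\beta-0.15)\,(h-h')\bigr]
  \quad \text{m}^{-3}
```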
Advisors/Committee Members: Cohen, Morris (advisor), Romberg, Justin (advisor), Bibby, Malcolm (committee member), Said, Ryan (committee member), Simon, Sven (committee member).
Subjects/Keywords: D-region; Ionosphere; Lightning; VLF; LF; Sferic; Propagation; Electromagnetics
APA (6th Edition):
McCormick, J. C. (2019). D region tomography: A technique for ionospheric imaging using lightning-generated sferics and inverse modeling. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62300
Chicago Manual of Style (16th Edition):
McCormick, Jackson C. “D region tomography: A technique for ionospheric imaging using lightning-generated sferics and inverse modeling.” 2019. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/62300.
MLA Handbook (7th Edition):
McCormick, Jackson C. “D region tomography: A technique for ionospheric imaging using lightning-generated sferics and inverse modeling.” 2019. Web. 07 Mar 2021.
Vancouver:
McCormick JC. D region tomography: A technique for ionospheric imaging using lightning-generated sferics and inverse modeling. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/62300.
Council of Science Editors:
McCormick JC. D region tomography: A technique for ionospheric imaging using lightning-generated sferics and inverse modeling. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/62300

Georgia Tech
5.
Schnaidt Grez, German Augusto.
Non-parametric statistical models using wavelets: Theory and methods.
Degree: PhD, Industrial and Systems Engineering, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/62657
Machine learning and data analytics have become key tools in the advancement of modern society, with a vast variety of applications exhibiting exponential growth in breadth and depth during the past few years. Moreover, the advancement of data-gathering technologies makes massive amounts of data available, which fuels the opportunity for developing and applying new analytics tools to obtain insights and to make effective and efficient use of the data. This dissertation aims to contribute to the existing scientific methodologies in this context, with a focus on the non-parametric statistical domain due to its robustness to prior modeling assumptions and its flexibility of application in many different contexts. In light of this objective, four non-parametric methodologies based on wavelets are introduced and analyzed. Problems such as survival density estimation, non-linear additive regression and multiscale correlation analysis are covered, and each methodology is studied from both theoretical and pragmatic perspectives. Moreover, a theoretical foundation for each proposed method is developed, and its applicability and performance are then illustrated using simulation studies, real data sets and comparison with previously published results in the field.
Advisors/Committee Members: Vidakovic, Brani (advisor), Xie, Yao (committee member), Paynabar, Kamran (committee member), Goldsman, Dave (committee member), Romberg, Justin (committee member).
Subjects/Keywords: Non-parametric statistics; Wavelets
APA (6th Edition):
Schnaidt Grez, G. A. (2019). Non-parametric statistical models using wavelets: Theory and methods. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62657
Chicago Manual of Style (16th Edition):
Schnaidt Grez, German Augusto. “Non-parametric statistical models using wavelets: Theory and methods.” 2019. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/62657.
MLA Handbook (7th Edition):
Schnaidt Grez, German Augusto. “Non-parametric statistical models using wavelets: Theory and methods.” 2019. Web. 07 Mar 2021.
Vancouver:
Schnaidt Grez GA. Non-parametric statistical models using wavelets: Theory and methods. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/62657.
Council of Science Editors:
Schnaidt Grez GA. Non-parametric statistical models using wavelets: Theory and methods. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/62657

Georgia Tech
6.
Parihar, Abhinav.
Utilizing switched linear dynamics of interconnected state transition devices for approximating certain global functions.
Degree: PhD, Electrical and Computer Engineering, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/62725
The objective of the proposed research is to create alternative computing models and architectures, unlike (discrete) sequential Turing machine/Von Neumann style models, which utilize the network dynamics of interconnected IMT (insulator-metal transition) devices. This work focuses on circuits (mainly coupled oscillators) and the resulting switched linear dynamical systems that arise in networks of IMT devices. Electrical characteristics of the devices and their stochasticity are modeled mathematically and used to explain experimentally observed behavior. For certain kinds of connectivity patterns, the steady-state limit cycles of these systems encode approximate solutions to global functions like the dominant eigenvector of the connectivity matrix and graph coloring of the connectivity graph.
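A generic illustration of the phase-based coloring idea (repulsively coupled phase oscillators, not the thesis's IMT device model): neighbouring oscillators push each other's phases apart, and the resulting phase clusters can be read off as tentative color classes.

```python
# Hedged sketch: repulsive phase-oscillator dynamics on a 6-cycle.
# This is a generic Kuramoto-style stand-in, not the IMT circuit dynamics.
import numpy as np

A = np.zeros((6, 6))                     # adjacency of a 6-cycle (2-colorable)
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 6)
dt, K = 0.05, 1.0
for _ in range(4000):
    # repulsive coupling: each phase is pushed away from its neighbours
    theta += dt * K * (A * np.sin(theta[:, None] - theta[None, :])).sum(axis=1)
theta %= 2 * np.pi

colors = np.round((theta - theta[0]) / np.pi).astype(int) % 2   # bin the phases
ok = all(colors[i] != colors[j] for i in range(6) for j in range(6) if A[i, j])
print(colors, "proper 2-coloring:", ok)
```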
Advisors/Committee Members: Raychowdhury, Arijit (advisor), Romberg, Justin (committee member), Mukhopadhyay, Saibal (committee member), Egerstedt, Magnus (committee member), Datta, Suman (committee member).
Subjects/Keywords: Coupled oscillators; Switched linear dynamics; Eigenvector; Graph coloring
APA (6th Edition):
Parihar, A. (2019). Utilizing switched linear dynamics of interconnected state transition devices for approximating certain global functions. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62725
Chicago Manual of Style (16th Edition):
Parihar, Abhinav. “Utilizing switched linear dynamics of interconnected state transition devices for approximating certain global functions.” 2019. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/62725.
MLA Handbook (7th Edition):
Parihar, Abhinav. “Utilizing switched linear dynamics of interconnected state transition devices for approximating certain global functions.” 2019. Web. 07 Mar 2021.
Vancouver:
Parihar A. Utilizing switched linear dynamics of interconnected state transition devices for approximating certain global functions. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/62725.
Council of Science Editors:
Parihar A. Utilizing switched linear dynamics of interconnected state transition devices for approximating certain global functions. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/62725

Georgia Tech
7.
Amaravati, Anvesha.
Energy-efficient circuits and system architectures to enable intelligence at the edge of the cloud.
Degree: PhD, Electrical and Computer Engineering, 2018, Georgia Tech
URL: http://hdl.handle.net/1853/62240
Internet of Things (IoT) devices are collecting large amounts of data for video processing, health monitoring, etc. Transmitting the data from the sensor to the cloud requires a large aggregate bandwidth. The objective of the proposed research is to leverage advances in machine learning to perform in-sensor computation, thus reducing the transmission bandwidth, preserving data privacy and enabling low-power operation. The proposed research demonstrates a system design and IC designs to achieve energy efficiency. As a system prototype, we demonstrate a light-powered, always-on gesture recognition system. As circuit innovations, we show voltage- and time-based matrix-multiplying ADCs (MMADCs) and compressive sensing ADCs (CS-ADCs), along with measurement results. The proposed time-based MMADC is digitally synthesizable, can operate at a supply voltage as low as 0.4 V, and demonstrates higher energy efficiency compared to state-of-the-art designs. As an SoC innovation, we propose time-based reinforcement learning for edge computing, implemented along with sensors and actuators to demonstrate an autonomous obstacle-avoidance robot.
Advisors/Committee Members: Raychowdhury, Arijit (advisor), Lim, Sung-Kyu (committee member), Romberg, Justin (committee member), Kim, Hyesoon (committee member), Bowman, Keith (committee member).
Subjects/Keywords: Compressive sensing; Reinforcement learning
APA (6th Edition):
Amaravati, A. (2018). Energy-efficient circuits and system architectures to enable intelligence at the edge of the cloud. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62240
Chicago Manual of Style (16th Edition):
Amaravati, Anvesha. “Energy-efficient circuits and system architectures to enable intelligence at the edge of the cloud.” 2018. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/62240.
MLA Handbook (7th Edition):
Amaravati, Anvesha. “Energy-efficient circuits and system architectures to enable intelligence at the edge of the cloud.” 2018. Web. 07 Mar 2021.
Vancouver:
Amaravati A. Energy-efficient circuits and system architectures to enable intelligence at the edge of the cloud. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/62240.
Council of Science Editors:
Amaravati A. Energy-efficient circuits and system architectures to enable intelligence at the edge of the cloud. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/62240

Georgia Tech
8.
Al-Hussaini, Irfan.
Interpretable models for automatic sleep stage scoring.
Degree: MS, Electrical and Computer Engineering, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/62851
This thesis aims to combine domain knowledge with deep learning to develop interpretable yet robust models for a particular clinical decision support system, sleep staging. The method is transferable to other areas where domain knowledge can be represented by a set of computational rules. Currently, sleep staging, a cardinal step in evaluating the quality of sleep, is a manual process done by sleep staging experts who are trained over months. Moreover, it is tedious and complex, as it can take a trained expert several hours to annotate just one patient's polysomnogram (PSG) from a single night. As a result, data-driven methods for automating this process have been explored extensively by the research community, and deep learning models have demonstrated state-of-the-art performance in automating sleep staging. However, interpretability, another key desideratum, has largely remained unexplored. In this thesis, we propose SLEEPER: interpretable Sleep staging via Prototypes from Expert Rules, a method for automating sleep staging which combines deep learning models with expert-defined rules using a prototype learning framework to generate simple interpretable models. It derives a prototype, which is a representative latent embedding of PSG data fragments, for each sleep scoring rule and expert-defined feature. The inference models are simple and interpretable, like a shallow decision tree whose nodes are based on a similarity index with those meaningful rules and features. We evaluate the method using two PSG datasets collected from sleep studies and demonstrate that it can provide accurate sleep stage classification comparable to human experts and deep neural networks, with about 85% ROC-AUC and a kappa (κ) of about 0.7.
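A minimal sketch of the prototype-similarity idea (generic, with synthetic data; not the SLEEPER implementation): epoch embeddings are scored against rule-derived prototypes, and a shallow decision tree is trained on those similarity scores so that its splits remain readable.

```python
# Hedged sketch: similarity-to-prototype features feeding a shallow tree.
# Embeddings, prototypes and labels are all synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 32))    # one embedding per 30 s epoch
prototypes = rng.normal(size=(8, 32))       # one prototype per expert rule/feature
stages = rng.integers(0, 5, size=1000)      # sleep stages (W, N1, N2, N3, REM)

def cosine_sim(X, P):
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    return X @ P.T

features = cosine_sim(embeddings, prototypes)      # similarity index per prototype
clf = DecisionTreeClassifier(max_depth=4).fit(features, stages)
print(clf.score(features, stages))   # near chance here, since the labels are random
```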
Advisors/Committee Members: Sun, Jimeng (advisor), Inan, Omer T. (advisor), Rozell, Christopher J. (committee member), Romberg, Justin K. (committee member).
Subjects/Keywords: Sleep scoring; Interpretable; Deep learning; CNN; Decision tree; EEG; EOG; EMG
APA (6th Edition):
Al-Hussaini, I. (2020). Interpretable models for automatic sleep stage scoring. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62851
Chicago Manual of Style (16th Edition):
Al-Hussaini, Irfan. “Interpretable models for automatic sleep stage scoring.” 2020. Masters Thesis, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/62851.
MLA Handbook (7th Edition):
Al-Hussaini, Irfan. “Interpretable models for automatic sleep stage scoring.” 2020. Web. 07 Mar 2021.
Vancouver:
Al-Hussaini I. Interpretable models for automatic sleep stage scoring. [Internet] [Masters thesis]. Georgia Tech; 2020. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/62851.
Council of Science Editors:
Al-Hussaini I. Interpretable models for automatic sleep stage scoring. [Masters Thesis]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62851

Georgia Tech
9.
Xu, Shaojie.
Machine Learning Algorithm Design for Hardware Performance Optimization.
Degree: PhD, Electrical and Computer Engineering, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/62794
Machine learning has enabled us to extract and exploit information from collected data. In this thesis, we are particularly interested in how we can apply this powerful tool to enhance the performance of various hardware. The objective of our work is to combine techniques in machine learning, signal processing, and system control for hardware performance optimization. By leveraging collected data to construct a better model of both the hardware and the operating environment, machine learning enables the hardware to operate more power-efficiently, to obtain improved results, and to maintain robust performance against environmental changes. The proposed work targets three aims: (i) design data-driven signal processing algorithms which require fewer measurements taken from the sensor front-end; (ii) develop algorithm-hardware co-design techniques for hardware that performs specific machine learning tasks; and (iii) design adaptive hardware control algorithms.

For the first aim, we develop a compressive sensing recovery algorithm which achieves fast recovery speed and high recovery quality based on fewer compressed measurements. For the second aim, we propose a motion gesture recognition algorithm which works directly with video frames captured using compressive sensing techniques. The motion parameters are estimated in the compressed domain, and the estimation algorithm is implemented in mixed-signal circuits. We also improve the computational and memory efficiency of existing gesture classifiers. For the third aim, we develop multiple Doherty PA control algorithms based on bandit frameworks. By incorporating prior information about the Doherty PA's characteristics into our algorithm design, we improve learning efficiency and enable the PA to achieve robust and adaptive operation in time-variant environments.
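As a generic illustration of the bandit framing in the third aim (a textbook UCB1 loop over hypothetical bias settings, not the thesis's Doherty PA controller):

```python
# Hedged sketch: UCB1 choosing among three hypothetical PA settings whose
# "efficiency" rewards are noisy. Values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_eff = np.array([0.42, 0.55, 0.48])   # unknown mean efficiency per setting
counts = np.zeros(3)
means = np.zeros(3)

for t in range(1, 2001):
    if t <= 3:
        arm = t - 1                                        # try each setting once
    else:
        arm = int(np.argmax(means + np.sqrt(2 * np.log(t) / counts)))
    reward = true_eff[arm] + 0.05 * rng.standard_normal()  # noisy measurement
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]      # running mean update

print("setting chosen most often:", int(np.argmax(counts)))   # typically index 1
```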
Advisors/Committee Members: Romberg, Justin (advisor), Raychowdhury, Arijit (committee member), Wang, Hua (committee member), Davenport, Mark (committee member), Dyer, Eva (committee member).
Subjects/Keywords: Machine Learning; Algorithm-Hardware; Co-Design; Control; Compressive Sensing; Bandits; Reinforcement Learning
APA (6th Edition):
Xu, S. (2020). Machine Learning Algorithm Design for Hardware Performance Optimization. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62794
Chicago Manual of Style (16th Edition):
Xu, Shaojie. “Machine Learning Algorithm Design for Hardware Performance Optimization.” 2020. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/62794.
MLA Handbook (7th Edition):
Xu, Shaojie. “Machine Learning Algorithm Design for Hardware Performance Optimization.” 2020. Web. 07 Mar 2021.
Vancouver:
Xu S. Machine Learning Algorithm Design for Hardware Performance Optimization. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/62794.
Council of Science Editors:
Xu S. Machine Learning Algorithm Design for Hardware Performance Optimization. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62794

Georgia Tech
10.
Shih, Ping-Chang.
Joint variational camera calibration refinement and 4-D stereo reconstruction applied to oceanic sea states.
Degree: PhD, Electrical and Computer Engineering, 2014, Georgia Tech
URL: http://hdl.handle.net/1853/52320
In this thesis, an innovative algorithm for improving the accuracy of variational space-time stereoscopic reconstruction of ocean surfaces is presented. The space-time reconstruction method, developed based on stereo computer vision principles and variational optimization theory, takes videos captured by synchronized cameras as inputs and produces the shape and superficial pattern of an overlapped region of interest as outputs. These outputs are designed to be the minimizers of the variational optimization framework and are dependent on the estimation of the camera parameters. Therefore, from the perspective of computer vision, the proposed algorithm adjusts the estimation of camera parameters to lower the disagreement between the reconstruction and the 2-D camera recordings. From a mathematical perspective, since the minimizers of the variational framework are determined by a set of partial differential equations (PDEs), the algorithm modifies the coefficients of the PDEs based on the current numerical solutions to reduce the minimum of the optimization framework. Our algorithm increases the tolerance to errors in the camera parameters, so the joint operation of our algorithm and the variational reconstruction method can generate accurate space-time models even when using videos captured by perturbed cameras as input. This breakthrough paves the way for future ocean surface reconstruction using videos filmed by remotely controlled helicopters. A number of techniques, technical and theoretical, are explored to fulfill the development and implementation of the algorithm and to address related computational issues. The effectiveness of the proposed algorithm is validated through statistics applied to real ocean surface reconstructions from data collected at an offshore platform off the Crimean Peninsula in the Black Sea. Moreover, synthetic data generated using computer graphics are customized to simulate various situations that are not recorded in the Crimea dataset for further demonstration of the algorithm.
Advisors/Committee Members: Yezzi, Anthony (advisor), Fedele, Francesco (committee member), Vela, Patricio (committee member), Romberg, Justin (committee member), Dellaert, Frank (committee member), Tannenbaum, Allen (committee member).
Subjects/Keywords: Stereo computer vision; Camera calibration; 3-D reconstruction; Variational
APA (6th Edition):
Shih, P. (2014). Joint variational camera calibration refinement and 4-D stereo reconstruction applied to oceanic sea states. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/52320
Chicago Manual of Style (16th Edition):
Shih, Ping-Chang. “Joint variational camera calibration refinement and 4-D stereo reconstruction applied to oceanic sea states.” 2014. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/52320.
MLA Handbook (7th Edition):
Shih, Ping-Chang. “Joint variational camera calibration refinement and 4-D stereo reconstruction applied to oceanic sea states.” 2014. Web. 07 Mar 2021.
Vancouver:
Shih P. Joint variational camera calibration refinement and 4-D stereo reconstruction applied to oceanic sea states. [Internet] [Doctoral dissertation]. Georgia Tech; 2014. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/52320.
Council of Science Editors:
Shih P. Joint variational camera calibration refinement and 4-D stereo reconstruction applied to oceanic sea states. [Doctoral Dissertation]. Georgia Tech; 2014. Available from: http://hdl.handle.net/1853/52320

Georgia Tech
11.
Luo, Chenchi.
Non-uniform sampling: algorithms and architectures.
Degree: PhD, Electrical and Computer Engineering, 2012, Georgia Tech
URL: http://hdl.handle.net/1853/45873
Modern signal processing applications emerging in the telecommunication and instrumentation industries have placed an increasing demand for ADCs with higher speed and resolution. The most fundamental challenge in such progress lies at the heart of classic signal processing: the Shannon-Nyquist sampling theorem, which states that when a signal is sampled uniformly, there is no way to increase the upper frequency in the signal spectrum and still unambiguously represent the signal except by raising the sampling rate. This thesis is dedicated to exploring ways to break through the Shannon-Nyquist sampling rate by applying non-uniform sampling techniques.

Time interleaving is probably the most intuitive way to parallelize the uniform sampling process in order to achieve a higher sampling rate. Unfortunately, the channel mismatches in the TIADC system make the system an instance of a recurrent non-uniform sampling system whose non-uniformities are detrimental to the performance of the system and need to be calibrated. Accordingly, this thesis proposes a flexible and efficient architecture to compensate for the channel mismatches in the TIADC system. As a key building block in the calibration architecture, the design of the Farrow-structured adjustable fractional delay filter has been investigated in detail. A new modified Farrow structure is proposed to design adjustable FD filters that are optimized for a given range of bandwidths and fractional delays. The application of the Farrow structure is not limited to the design of adjustable fractional delay filters; it can also be used to implement adjustable lowpass, highpass and bandpass filters as well as adjustable multirate filters. This thesis further extends the Farrow structure to the design of filters with adjustable polynomial phase responses.

Inspired by the theory of compressive sensing, another contribution of this thesis is to use randomization as a means to overcome the limit of the Nyquist rate. This thesis investigates the impact of random sampling intervals or jitters on the power spectrum of the sampled signal. It shows that the aliases of the original signal can be well shaped by choosing an appropriate probability distribution of the sampling intervals or jitters, such that the aliases can be viewed as a source of noise in the signal power spectrum. A new theoretical framework has been established to associate the probability mass function of the random sampling intervals or jitters with the alias-shaping effect. Based on this theoretical framework, the thesis proposes three random sampling architectures, i.e., SAR ADC, ramp ADC and level-crossing ADC, that can be easily implemented based on the corresponding standard ADC architectures. Detailed models and simulations are established to verify the effectiveness of the proposed architectures. A new reconstruction algorithm called the successive sine matching pursuit has also been proposed to recover a class of spectrally sparse signals from a sparse set of non-uniform samples onto a denser uniform time…
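The Farrow structure mentioned above evaluates the output as a polynomial in the fractional delay μ, y[n] = Σ_k μ^k (c_k * x)[n], so μ can be changed at run time without redesigning the branch filters. A minimal sketch with linear-interpolation branches follows (the thesis designs higher-order, bandwidth-optimized branches; this shows only the structural idea).

```python
# Hedged sketch of a Farrow-structure fractional-delay filter with the
# simplest possible branches (linear interpolation), not the thesis design.
import numpy as np

def farrow_fd(x, mu, branches):
    """y[n] = sum_k mu**k * (c_k * x)[n] for FIR branch filters c_k."""
    y = np.zeros(len(x))
    for k, c in enumerate(branches):
        y += mu**k * np.convolve(x, c)[: len(x)]
    return y

# Linear interpolation as a 1st-order Farrow: y[n] = (1 - mu) x[n] + mu x[n-1]
branches = [np.array([1.0, 0.0]), np.array([-1.0, 1.0])]

n = np.arange(64)
x = np.sin(2 * np.pi * 0.02 * n)
y = farrow_fd(x, mu=0.3, branches=branches)       # x delayed by ~0.3 samples
print(np.allclose(y[1:], np.sin(2 * np.pi * 0.02 * (n[1:] - 0.3)), atol=5e-3))
```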
Advisors/Committee Members: McClellan, James (Committee Chair), Romberg, Justin (Committee Chair), Anderson, David (Committee Member), Davenport, Mark (Committee Member), Vempala, Santosh (Committee Member).
Subjects/Keywords: TIADC; Farrow structure; Compressive sensing; Sparsity; Analog-to-digital converters; Sampling (Statistics); Algorithms; Signal processing; Signal processing – Digital techniques
APA (6th Edition):
Luo, C. (2012). Non-uniform sampling: algorithms and architectures. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/45873
Chicago Manual of Style (16th Edition):
Luo, Chenchi. “Non-uniform sampling: algorithms and architectures.” 2012. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/45873.
MLA Handbook (7th Edition):
Luo, Chenchi. “Non-uniform sampling: algorithms and architectures.” 2012. Web. 07 Mar 2021.
Vancouver:
Luo C. Non-uniform sampling: algorithms and architectures. [Internet] [Doctoral dissertation]. Georgia Tech; 2012. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/45873.
Council of Science Editors:
Luo C. Non-uniform sampling: algorithms and architectures. [Doctoral Dissertation]. Georgia Tech; 2012. Available from: http://hdl.handle.net/1853/45873

Georgia Tech
12.
Nichols, Brendan.
Exploiting ambient noise for coherent processing of mobile vector sensor arrays.
Degree: PhD, Mechanical Engineering, 2018, Georgia Tech
URL: http://hdl.handle.net/1853/59897
A network of mobile sensors, such as vector sensors mounted on drifting floats, can be used as an array for locating acoustic sources in an ocean environment. Accurate localization using coherent processing on such an array dictates that the locations of the sensor elements be well known. In many cases, a mobile, submerged array cannot meet this requirement; however, the presence of ambient acoustic noise provides an opportunity to correct sensor location errors. It has been previously shown that ambient noise correlations across separated, fixed hydrophones can provide the separation distance between them (K. G. Sabra et al., 2005, IEEE J. Ocean Engineering, Vol. 30). A time-domain framework for this method is presented for the case of vector sensors in isotropic ambient noise to quantify their gain relative to traditional hydrophone correlations. Furthermore, a novel method is presented for identifying hidden ambient noise correlation peaks when the separation distance is changing, and its accuracy is found to match that of GPS. Lastly, a novel weighted coherent processing algorithm is presented and its performance compared to traditional methods, showing increased localization precision even in the presence of severe noise. This method is applied to locating a source, and succeeds using both GPS and ambient-noise-corrected sensor locations. All experimental data used in these studies were collected with a novel vector sensor array, and details of its design and deployment are presented as well.
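A toy version of the noise-correlation step cited above (Sabra et al., 2005): a single coherent noise arrival is recorded by two sensors, and the lag of the cross-correlation peak gives the travel-time offset and hence the spacing. This is purely synthetic and much simpler than a real isotropic noise field or the thesis's processing chain.

```python
# Hedged sketch: estimate sensor spacing from the cross-correlation peak of
# shared ambient noise (synthetic single-arrival model, not real ocean noise).
import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(0)
fs, c = 1000.0, 1500.0                   # sample rate (Hz), sound speed (m/s)
true_sep = 30.0                          # true spacing in metres
delay = int(round(true_sep / c * fs))    # travel-time offset in samples

noise = rng.standard_normal(200_000)
s1 = noise[delay:]                                            # sensor 1 record
s2 = noise[: len(s1)] + 0.5 * rng.standard_normal(len(s1))    # same noise offset by `delay`, plus local noise

xcorr = correlate(s1, s2, mode="full", method="fft")
lags = correlation_lags(len(s1), len(s2), mode="full")
tau = abs(lags[np.argmax(xcorr)]) / fs                        # travel-time estimate (s)
print("estimated separation:", tau * c, "m")                  # ~30 m
```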
Advisors/Committee Members: Sabra, Karim (advisor), Trivett, David (committee member), Meaud, Julien (committee member), Arvanitis, Costas (committee member), Romberg, Justin (committee member).
Subjects/Keywords: Ambient noise; Coherent processing; Beamforming; Vector sensor; Array signal processing; Stochastic search; Experimental data; Noise correlation
APA (6th Edition):
Nichols, B. (2018). Exploiting ambient noise for coherent processing of mobile vector sensor arrays. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/59897
Chicago Manual of Style (16th Edition):
Nichols, Brendan. “Exploiting ambient noise for coherent processing of mobile vector sensor arrays.” 2018. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/59897.
MLA Handbook (7th Edition):
Nichols, Brendan. “Exploiting ambient noise for coherent processing of mobile vector sensor arrays.” 2018. Web. 07 Mar 2021.
Vancouver:
Nichols B. Exploiting ambient noise for coherent processing of mobile vector sensor arrays. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/59897.
Council of Science Editors:
Nichols B. Exploiting ambient noise for coherent processing of mobile vector sensor arrays. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/59897

Georgia Tech
13.
Whitaker, Bradley M.
Modifying sparse coding to model imbalanced datasets.
Degree: PhD, Electrical and Computer Engineering, 2018, Georgia Tech
URL: http://hdl.handle.net/1853/59919
The objective of this research is to explore the use of sparse coding as a tool for unsupervised feature learning to more effectively model imbalanced datasets. Traditional sparse coding dictionaries are learned by minimizing the average approximation error between a vector and its sparse decomposition. As such, these dictionaries may overlook important features that occur infrequently in the data. Without these features, it may be difficult to accurately discriminate between classes if one or more classes are not well represented in the training data. To overcome this problem, this work explores novel modifications to the sparse coding dictionary learning framework that encourage dictionaries to learn anomalous features. Sparse coding also inherently assumes that a vector can be represented as a sparse linear combination of a feature set. This work addresses the ability of sparse coding to learn a representative dictionary when the underlying data has a nonlinear sparse structure. Finally, this work illustrates one benefit of improved signal modeling by utilizing sparse coding in three imbalanced classification tasks.
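The "average approximation error" criterion referred to above is the standard dictionary-learning objective (written here in its usual form; the thesis's modifications that reward rare, anomalous features are not shown):

```latex
% Standard sparse-coding dictionary learning over data vectors y_1,...,y_N:
\min_{D,\ \{x_i\}}\ \frac{1}{N}\sum_{i=1}^{N}\bigl\| y_i - D x_i \bigr\|_2^2
\quad \text{s.t.}\quad \|x_i\|_0 \le k,\qquad \|d_j\|_2 \le 1 .
```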
Advisors/Committee Members: Anderson, David V. (advisor), Rozell, Christopher J. (committee member), Romberg, Justin K. (committee member), Li, Wing (committee member), Clifford, Gari D. (committee member).
Subjects/Keywords: Sparse coding; Imbalanced data; Machine learning
APA (6th Edition):
Whitaker, B. M. (2018). Modifying sparse coding to model imbalanced datasets. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/59919
Chicago Manual of Style (16th Edition):
Whitaker, Bradley M. “Modifying sparse coding to model imbalanced datasets.” 2018. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/59919.
MLA Handbook (7th Edition):
Whitaker, Bradley M. “Modifying sparse coding to model imbalanced datasets.” 2018. Web. 07 Mar 2021.
Vancouver:
Whitaker BM. Modifying sparse coding to model imbalanced datasets. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/59919.
Council of Science Editors:
Whitaker BM. Modifying sparse coding to model imbalanced datasets. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/59919

Georgia Tech
14.
Zafar, Munzir.
Whole body control of wheeled inverted pendulum humanoids.
Degree: PhD, Electrical and Computer Engineering, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/61739
A framework is proposed for controlling a Wheeled Inverted Pendulum (WIP) Humanoid to perform useful interactions with the environment while dynamically balancing itself on two wheels. As humanoid platforms are characterized by several degrees of freedom, they have the ability to perform several tasks simultaneously while obeying constraints on their motion and control. This problem is referred to as Whole-Body Control in the wider humanoid literature. We develop a framework for whole-body control of WIP humanoids that can be applied directly on the physical robot, which means that it can be made robust to modeling errors. The proposed approach is hierarchical, with a low-level controller responsible for controlling the manipulator/body and a high-level controller that defines center-of-mass targets for the low-level controller in order to control the zero dynamics of the system driving the wheels. The low-level controller plans over shorter horizons while considering a more complete dynamic model of the system, while the high-level controller plans over a longer horizon based on an approximate model of the robot for computational efficiency.
Advisors/Committee Members: Hutchinson, Seth (advisor), Theodorou, Evangelos A. (committee member), Boots, Byron E. (committee member), Christensen, Henrik I. (committee member), Romberg, Justin (committee member).
Subjects/Keywords: Whole body control; Wheeled inverted pendulum; Humanoids; Hierarchical; Optimization; Operational space; Model predictive control
APA (6th Edition):
Zafar, M. (2019). Whole body control of wheeled inverted pendulum humanoids. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61739
Chicago Manual of Style (16th Edition):
Zafar, Munzir. “Whole body control of wheeled inverted pendulum humanoids.” 2019. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/61739.
MLA Handbook (7th Edition):
Zafar, Munzir. “Whole body control of wheeled inverted pendulum humanoids.” 2019. Web. 07 Mar 2021.
Vancouver:
Zafar M. Whole body control of wheeled inverted pendulum humanoids. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/61739.
Council of Science Editors:
Zafar M. Whole body control of wheeled inverted pendulum humanoids. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61739

Georgia Tech
15.
Xia, Dong.
Statistical inference for large matrices.
Degree: PhD, Mathematics, 2016, Georgia Tech
URL: http://hdl.handle.net/1853/55632
This thesis covers two topics on matrix analysis and estimation in machine learning and statistics. The first topic is density matrix estimation with application to quantum state tomography. Density matrices are positive semi-definite Hermitian matrices of unit trace that describe the state of a quantum system. We develop minimax lower bounds on error rates of estimation of low-rank density matrices in trace regression models used in quantum state tomography (in particular, in the case of Pauli measurements), with explicit dependence of the bounds on the rank and other complexity parameters. Such bounds are established for several statistically relevant distances, including quantum versions of the Kullback-Leibler divergence (relative entropy distance) and of the Hellinger distance (the so-called Bures distance), and Schatten p-norm distances. Sharp upper bounds and oracle inequalities for the least squares estimator with von Neumann entropy penalization are obtained, showing that the minimax lower bounds are attained (up to logarithmic factors) for these distances.

The second topic is the analysis of spectral perturbations of matrices under Gaussian noise. Given a matrix contaminated with Gaussian noise, we develop sharp upper bounds on the perturbation of linear forms of singular vectors. In particular, sharp upper bounds are proved for the component-wise perturbation of singular vectors. These results can be applied to sub-matrix localization and spectral clustering algorithms.
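For context, the trace-regression model referred to above, together with one common form of the von Neumann entropy penalized least-squares estimator (the exact formulation in the thesis may differ):

```latex
% Trace regression for quantum state tomography, with rho a density matrix:
Y_j = \operatorname{tr}(\rho X_j) + \xi_j, \qquad j = 1,\dots,n,
\qquad \rho \succeq 0,\ \operatorname{tr}(\rho)=1 ,
%
% and a von Neumann entropy penalized least-squares estimator:
\hat{\rho} \in \arg\min_{S \succeq 0,\ \operatorname{tr}(S)=1}
\ \frac{1}{n}\sum_{j=1}^{n}\bigl(Y_j - \operatorname{tr}(S X_j)\bigr)^2
\;+\; \varepsilon\,\operatorname{tr}\!\bigl(S\log S\bigr).
```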
Advisors/Committee Members: Koltchinskii, Vladimir (advisor), Lounici, Karim (committee member), Romberg, Justin (committee member), Tetali, Prasad (committee member), Song, Le (committee member).
Subjects/Keywords: Low rank; Matrix estimation; Singular vectors; Random perturbation
APA (6th Edition):
Xia, D. (2016). Statistical inference for large matrices. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/55632
Chicago Manual of Style (16th Edition):
Xia, Dong. “Statistical inference for large matrices.” 2016. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/55632.
MLA Handbook (7th Edition):
Xia, Dong. “Statistical inference for large matrices.” 2016. Web. 07 Mar 2021.
Vancouver:
Xia D. Statistical inference for large matrices. [Internet] [Doctoral dissertation]. Georgia Tech; 2016. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/55632.
Council of Science Editors:
Xia D. Statistical inference for large matrices. [Doctoral Dissertation]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/55632

Georgia Tech
16.
Abdi, Afshin.
Distributed learning and inference in deep models.
Degree: PhD, Electrical and Computer Engineering, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/63671
In recent years, the size of deep learning problems has increased significantly, both in terms of the number of available training samples and the number of parameters and complexity of the model. In this thesis, we considered the challenges encountered in training and inference of large deep models, especially on nodes with limited computational power and capacity. We studied two classes of related problems: 1) distributed training of deep models, and 2) compression and restructuring of deep models for efficient distributed and parallel execution to reduce inference times. In particular, we considered the communication bottleneck in distributed training and inference of deep models. Data compression is a viable tool to mitigate the communication bottleneck in distributed deep learning. However, the existing methods suffer from a few drawbacks, such as the increased variance of stochastic gradients (SG), slower convergence rates, or added bias to the SG. We have addressed these challenges from three different perspectives: 1) information theory and the CEO problem, 2) indirect SG compression via matrix factorization, and 3) quantized compressive sampling. We showed, both theoretically and via simulations, that our proposed methods can achieve smaller MSE than other unbiased compression methods at lower communication bit rates, resulting in superior convergence rates. Next, we considered federated learning over wireless multiple access channels (MAC). Efficient communication requires the communication algorithm to satisfy the constraints imposed by the nodes in the network and the communication medium. To satisfy these constraints and take advantage of the over-the-air computation inherent in MAC, we proposed a framework based on random linear coding and developed efficient power management and channel usage techniques to manage the trade-offs between power consumption and communication bit rate. In the second part of this thesis, we considered the distributed parallel implementation of an already-trained deep model on multiple workers. Since latency due to synchronization and data transfer among workers adversely affects the performance of the parallel implementation, it is desirable to have minimum interdependency among the parallel sub-models on the workers. To achieve this goal, we developed and analyzed RePurpose, an efficient algorithm to rearrange the neurons in the neural network and partition them (without changing the general topology of the neural network) such that the interdependency among sub-models is minimized under the computation and communication constraints of the workers.
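A generic sketch of the kind of unbiased gradient compression the abstract is concerned with (textbook stochastic quantization; not the thesis's CEO-based, matrix-factorization, or quantized-compressive-sampling schemes):

```python
# Hedged sketch: unbiased stochastic quantization of a gradient vector.
# Randomized rounding keeps E[quantized g] = g at a reduced bit budget.
import numpy as np

def stochastic_quantize(g, rng, levels=16):
    scale = np.linalg.norm(g)
    if scale == 0.0:
        return np.zeros_like(g)
    y = np.abs(g) / scale * (levels - 1)            # map magnitudes to [0, levels-1]
    low = np.floor(y)
    q = low + (rng.random(g.shape) < (y - low))     # round up with prob. (y - low)
    return np.sign(g) * q * scale / (levels - 1)

rng = np.random.default_rng(1)
g = rng.standard_normal(10_000)
avg = np.mean([stochastic_quantize(g, np.random.default_rng(s)) for s in range(200)], axis=0)
print("mean error (close to 0 since the quantizer is unbiased):", np.mean(avg - g))
```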
Advisors/Committee Members: Fekri, Faramarz (advisor), AlRegib, Ghassan (committee member), Romberg, Justin (committee member), Bloch, Matthieu (committee member), Maguluri, Siva Theja (committee member).
Subjects/Keywords: Machine learning; Artificial intelligence; Distributed training; Distributed learning
APA (6th Edition):
Abdi, A. (2020). Distributed learning and inference in deep models. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/63671
Chicago Manual of Style (16th Edition):
Abdi, Afshin. “Distributed learning and inference in deep models.” 2020. Doctoral Dissertation, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/63671.
MLA Handbook (7th Edition):
Abdi, Afshin. “Distributed learning and inference in deep models.” 2020. Web. 07 Mar 2021.
Vancouver:
Abdi A. Distributed learning and inference in deep models. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/63671.
Council of Science Editors:
Abdi A. Distributed learning and inference in deep models. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/63671

Georgia Tech
17.
Lee, John Zhan Yi.
Exploiting Low-dimensional Structure and Optimal Transport for Tracking and Alignment.
Degree: PhD, Electrical and Computer Engineering, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/64011
▼ The objective of this thesis is to exploit low-dimensional structures (e.g., sparsity, low-rankness) and optimal transport theory to develop new tools for inference and distribution alignment problems. We investigate properties of structure at two scales: local structure of a single datum along the temporal continuum, and global structure across the dataset's entirety. To study local notions of structure, we consider the fundamental problem of support mismatch under the framework of signal inference: inference suffers when the signal support is poorly estimated. Popular metrics (e.g., Lp-norms) are particularly prone to mismatch due to their lack of machinery to describe geometric correlations between support locations. To fill this gap, we exploit optimal transport theory to propose regularizers that explicitly incorporate geometry. To realize such regularizers at scale, we develop efficient methods to overcome the traditionally prohibitive computational cost of computing optimal transport. To understand global notions of structure, we consider the challenging problem of distribution alignment, which spans machine learning, computer vision, and graph matching. To bypass the intractability of graph matching approaches, we approach this problem from a machine learning perspective and exploit statistical advantages of optimal transport to align distributions. We develop methods that incorporate the manifold and cluster structures needed to regularize against convergence to poor local minima, and demonstrate the superiority of our method on synthetic and real data. Finally, we present pioneering results in cluster-based alignability analysis, which gives theoretical conditions under which datasets can be aligned, as well as error bounds when the alignment transformation is constrained to be isometric.
Advisors/Committee Members: Rozell, Christopher J (advisor), Romberg, Justin K (committee member), Dyer, Eva L (committee member), Davenport, Mark A (committee member), Forest, Craig (committee member).
Subjects/Keywords: inverse problems; optimal transport; tracking; distribution alignment; optimization
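As an illustration of making optimal transport computationally tractable, the following is a minimal sketch of entropy-regularized optimal transport solved with Sinkhorn iterations, a standard scalable approach; it is not the specific regularizers or fast solvers developed in the thesis, and the histograms, ground cost, and parameters are illustrative.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """Entropy-regularized optimal transport between histograms a and b.

    C is the pairwise ground-cost matrix; eps controls the entropic smoothing.
    Returns the transport plan P with row sums ~a and column sums ~b.
    """
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # match column marginals
        u = a / (K @ v)                  # match row marginals
    return u[:, None] * K * v[None, :]

# Transport mass between two 1-D histograms on a grid.
x = np.linspace(0, 1, 50)
a = np.exp(-((x - 0.3) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.02); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2       # squared-distance ground cost
P = sinkhorn(a, b, C)
print(P.sum(), (P * C).sum())            # total mass ~1, cost of the entropic plan
```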

Georgia Tech
18.
Friedlander, Robert Daniel.
Thin Lens-Based Geometric Surface Inversion for Multiview Stereo.
Degree: PhD, Electrical and Computer Engineering, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/64140
▼ Current state-of-the-art multiview reconstruction methods are founded on a pinhole camera model that assumes perfectly focused images and thus fail when given defocused image data. To overcome this, a fully generative algorithm for the reconstruction of dense three-dimensional shapes under varying viewpoints and levels of focus is developed using a thin lens model, which accurately captures defocus blur in images. While easily stated, this requires a significant mathematical reformulation from the bottom up, as the simple perspective projection assumed by the pinhole model and used by current methods no longer applies under the more general thin lens model. New expressions for the forward modeling of image formation as well as for model inversion are developed. For the former, image irradiance is related to scene radiance using energy conservation, and the resulting integral expression has a closed-form solution for in-focus points that is shown to be more general and accurate than the one used in current methods. For the latter, the sensitivities of image irradiance to perturbations in both the scene radiance and the geometry are analyzed, and the necessary gradient descent evolution equations are extracted from these sensitivities. A variational surface evolution algorithm is then formed in which image estimates generated by the thin lens forward model are compared to the actual measured images, and the resulting pixel-wise error is fed into the evolution equations to update the surface shape and scene radiance estimates. The algorithm is experimentally validated for the case of piecewise-constant scene radiance on both computer-generated and real images, and the new method is shown to accurately reconstruct sharp object features from even severely defocused images, with increased robustness to noise compared to pinhole-based methods.
Advisors/Committee Members: Yezzi, Anthony J (advisor), Vela, Patricio (committee member), Dellaert, Frank (committee member), Romberg, Justin (committee member), Kang, Sung Ha (committee member).
Subjects/Keywords: Multiview reconstruction; Variational methods; Thin lens model; Image irradiance
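As a small illustration of why the thin lens model matters here, the sketch below evaluates the textbook thin-lens circle-of-confusion diameter, the quantity that makes defocus blur depend on scene depth; it is not the thesis's forward imaging model, and the function name, units, and example numbers are illustrative.

```python
def blur_diameter(depth, focus_depth, focal_length, aperture):
    """Thin-lens circle-of-confusion diameter on the sensor (same units as inputs).

    depth        : distance of the scene point from the lens
    focus_depth  : distance at which the lens is focused
    focal_length : lens focal length
    aperture     : aperture (entrance pupil) diameter
    A point exactly at focus_depth maps to a single point (diameter 0);
    points in front of or behind it blur into a disc of this diameter.
    """
    return (aperture * abs(depth - focus_depth) / depth
            * focal_length / (focus_depth - focal_length))

# Example: 50 mm lens at f/2 (25 mm aperture) focused at 2 m; a point at 3 m.
print(blur_diameter(3000.0, 2000.0, 50.0, 25.0))  # roughly 0.21 mm of blur on the sensor
```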

Georgia Tech
19.
Srinivasa, Rakshith.
Subspace learning by randomized sketching.
Degree: PhD, Electrical and Computer Engineering, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/64177
▼ High dimensional data is often accompanied by inherent low dimensionality that can be leveraged to design scalable machine learning and signal processing algorithms. Developing efficient computational frameworks that take advantage of the underlying structure in the data is crucial. In this thesis, we consider a particular form of inherent low dimensionality in data: subspace models. In many applications, data is known to lie close to a low dimensional subspace; the underlying subspace itself may or may not be known a priori. Incorporating this structure into data acquisition systems and algorithms can aid scalability. We first consider two specific applications in array signal processing where subspace priors on the data are commonly used. For both applications, we develop algorithms that require a number of measurements that scales only with the dimension of the underlying subspace. In doing so, we show that arrays demand dimensionality reduction maps that can operate on individual subsets or blocks of data at a time, without access to the other blocks. Inspired by such block constraints, we consider more general problems in numerical linear algebra where the data has a natural partition into blocks, as is common in applications with distributed or decentralized data. We study the problems of sketched ridge regression and sketched matrix multiplication under this constraint and give sample-optimal theoretical guarantees for block-diagonal sketching matrices. Extending the block model to low-rank matrix sensing, we then study the problem of recovering a low-rank matrix from compressed observations of each column. While each column individually is compressed beyond the point of recovery, we leverage the columns' joint structure to recover the matrix as a whole. To do so, we establish a new framework for designing estimators of low-rank matrices that obey the constraints imposed by different observation models. Finally, we extend our framework for designing low-rank matrix estimators to the application of blind deconvolution, and provide a novel estimator that enjoys uniform recovery guarantees over the entire signal class while being sample-optimal.
Advisors/Committee Members: Romberg, Justin (advisor), Davenport, Mark (committee member), Lee, Kiryung (committee member), Koltchinskii, Vladimir (committee member), Vempala, Santosh (committee member).
Subjects/Keywords: Subspace learning; randomized numerical linear algebra; array processing; sketching
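The block constraint described above can be illustrated with a short sketch of sketched ridge regression using a block-diagonal Gaussian sketch, so that each block (worker) compresses only its own rows of the data; the problem sizes and sketch dimensions are illustrative, and no claim is made that this matches the thesis's estimators or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 4000, 50, 1.0             # samples, features, ridge parameter
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

# Block-diagonal sketch: each of `k` row blocks is compressed independently,
# so a worker only ever touches its own partition of the data.
k, m_per_block = 8, 100               # number of blocks, sketch rows per block
rows_per_block = n // k
SA_blocks, Sb_blocks = [], []
for j in range(k):
    sl = slice(j * rows_per_block, (j + 1) * rows_per_block)
    S_j = rng.standard_normal((m_per_block, rows_per_block)) / np.sqrt(m_per_block)
    SA_blocks.append(S_j @ A[sl])
    Sb_blocks.append(S_j @ b[sl])
SA, Sb = np.vstack(SA_blocks), np.concatenate(Sb_blocks)

# Ridge solutions from the full data and from the sketch.
x_full   = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)
x_sketch = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
print(np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full))
```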
20.
Shaban, Fahad.
Application of L1 reconstruction of sparse signals to ambiguity resolution in radar.
Degree: MS, Electrical and Computer Engineering, 2013, Georgia Tech
URL: http://hdl.handle.net/1853/47637
▼ The objective of the proposed research is to develop a new algorithm for range and Doppler ambiguity resolution in radar detection data using L1 minimization methods for sparse signals, and to investigate the properties of such techniques. This novel approach to ambiguity resolution makes use of the sparse measurement structure of the post-detection data in multiple pulse repetition frequency (PRF) radars and the resulting equivalence between the computationally intractable L0 minimization and the surrogate L1 minimization. The ambiguity resolution problem is cast as a linear system of equations, which is then solved for the unique sparse solution in the absence of errors. It is shown that the new technique successfully resolves range and Doppler ambiguities and that recovery is exact in the ideal, error-free case. The behavior of the technique is then investigated in the presence of the real-world data errors encountered in the radar measurement and detection process, such as blind zone effects, collisions, false alarms, and missed detections. It is shown that the mathematical model (a linear system of equations) developed for the ideal case can be adjusted to account for these data errors. Empirical results show that the L1 minimization approach also works well in the presence of errors, with minor extensions to the algorithm. Several examples demonstrate the successful implementation of the new technique for range and Doppler ambiguity resolution in pulse Doppler radars.
Advisors/Committee Members: Richards, Mark (Committee Chair), Lanterman, Aaron (Committee Member), Romberg, Justin (Committee Member).
Subjects/Keywords: Ambiguity resolution; Pulse Doppler radars; Sparse reconstruction; Multiple PRFs; L1 minimization; Radar; Signal detection; Sparse matrices
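The core idea can be sketched in a few lines: model the folded, ambiguous detections as a linear system y = Ax with a sparse unambiguous profile x, and recover x by L1 minimization (basis pursuit), here posed as a linear program with SciPy. The folding model, PRF choices, and sizes are illustrative rather than the thesis's radar model.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 120                                   # unambiguous range bins
prfs = [7, 11, 13]                        # coprime folding moduli (illustrative)

# Each PRF folds the true range profile modulo its number of ambiguous bins.
def folding_matrix(n_bins, modulus):
    F = np.zeros((modulus, n_bins))
    F[np.arange(n_bins) % modulus, np.arange(n_bins)] = 1.0
    return F

A = np.vstack([folding_matrix(N, p) for p in prfs])
x_true = np.zeros(N)
x_true[rng.choice(N, size=3, replace=False)] = rng.uniform(1, 2, size=3)  # sparse targets
y = A @ x_true                            # folded (ambiguous) detections

# Basis pursuit: min ||x||_1 s.t. Ax = y, via the LP split x = u - v with u, v >= 0.
n = A.shape[1]
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_hat - x_true))     # near zero when L1 recovery succeeds in this error-free setting
```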
21.
Colón, Guillermo J.
Avian musing feature space analysis.
Degree: MS, Electrical and Computer Engineering, 2012, Georgia Tech
URL: http://hdl.handle.net/1853/44754
▼ The purpose of this study was to analyze the possibility of utilizing known signal processing and machine learning algorithms to correlate environmental data with chicken vocalizations. The specific musings analyzed consist not of one chicken's vocalizations but of a whole collective's; it therefore becomes a chatter problem. Similar attempts at such a correlation have been made in the past, but with singled-out birds rather than a multitude. This study was performed on broiler chickens (birds used in meat production). One reason this correlation is useful is automated control: using the chickens' own vocalizations to determine the temperature, the humidity, the levels of ammonia, and other environmental factors reduces, and might even remove, the need for sophisticated sensors. Another factor this study aimed to correlate with vocalization was stress in the chickens. This has important implications for animal welfare, to guarantee that the animals are being properly taken care of; it has also been shown that the meat of non-stressed chickens is of much better quality than that of stressed birds. The audio was filtered and certain features were extracted to predict stress. The features considered were loudness, spectral centroid, spectral sparsity, temporal sparsity, transient index, temporal average, temporal standard deviation, temporal skewness, and temporal kurtosis. In the end, of all the features analyzed, kurtosis and loudness proved to be the best features for identifying stressed birds in audio.
Advisors/Committee Members: Anderson, David (Committee Chair), Romberg, Justin (Committee Member), Vela, Patricio (Committee Member).
Subjects/Keywords: Broiler; Chicken; Feature; Audio; Segmentation; Signal processing -- Digital techniques; Machine learning; Sound production by animals; Chickens -- Vocalization
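For reference, here is a minimal sketch of computing a few of the listed features for a single audio frame, using common textbook definitions (RMS loudness, spectral centroid, temporal kurtosis); the exact definitions and framing used in the thesis may differ.

```python
import numpy as np
from scipy.stats import kurtosis

def frame_features(frame, sample_rate):
    """A few of the listed features for one audio frame (common definitions)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return {
        "loudness": float(np.sqrt(np.mean(frame ** 2))),             # RMS energy
        "spectral_centroid": float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)),
        "temporal_kurtosis": float(kurtosis(frame)),                  # peakedness of the waveform
    }

# Synthetic example: a tone buried in noise, framed at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
audio = 0.3 * np.sin(2 * np.pi * 1200 * t) + 0.05 * np.random.default_rng(0).standard_normal(fs)
print(frame_features(audio[:2048], fs))
```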
22.
Lani, Shane W.
Passive acoustic imaging and monitoring using ambient noise.
Degree: MS, Mechanical Engineering, 2012, Georgia Tech
URL: http://hdl.handle.net/1853/50136
▼ An approximation of the Green's function can be obtained by taking the cross-correlation of ambient noise recorded simultaneously on separate sensors. This method is applied in two experiments, which illustrate the advantages and challenges of the technique. The first experiment is in the ultrasonic regime (5-30 MHz) and uses capacitive micromachined ultrasonic transducer arrays to image the near field, comparing the passive imaging to conventional pulse-echo imaging. Both the array and the target are immersed in a fluid, with the sensors recording the fluid's random thermal-mechanical motion as the ambient noise. The second experiment is a passive ocean monitoring experiment that uses spatiotemporal filtering to rapidly extract coherent arrivals between two vertical line arrays. In this case the ambient noise in the 250-1500 Hz band is dominated by non-stationary shipping noise. For imaging, the cross-correlation must extract the Green's function so that the imaging can be done correctly; for monitoring, the important feature is the change in arrivals, which corresponds to changes in the environment. Results of both experiments are presented, along with the advantages of this passive method over the more established active methods.
Advisors/Committee Members: Sabra, Karim (advisor), Degertekin, F. Levent (committee member), Romberg, Justin (committee member).
Subjects/Keywords: Ultrasound imaging; Noise imaging; Ocean monitoring; Noise; Cross-correlation; Noise monitoring; Ambient sounds; Green's functions
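A minimal sketch of the underlying idea: two sensors record the same diffuse noise field with a relative delay plus independent sensor noise, and the peak of their cross-correlation recovers that travel time, the key arrival of the empirical Green's function. The simulation parameters are illustrative and greatly simplified relative to either experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000                        # sample rate (Hz)
n = 200_000                        # length of the noise recordings (samples)
delay = 37                         # propagation delay from sensor A to sensor B (samples)

# A diffuse noise field seen by both sensors: B hears it `delay` samples after A,
# and each sensor adds its own independent noise.
field = rng.standard_normal(n)
rec_a = field + 0.5 * rng.standard_normal(n)
rec_b = np.roll(field, delay) + 0.5 * rng.standard_normal(n)

# Circular cross-correlation via the FFT; its peak lag recovers the travel time
# between the two sensors.
xcorr = np.fft.irfft(np.fft.rfft(rec_b) * np.conj(np.fft.rfft(rec_a)))
lags = np.fft.fftfreq(n, d=1.0 / n).astype(int)
peak_lag = lags[np.argmax(xcorr)]
print(peak_lag, peak_lag / fs)     # 37 samples, i.e. 3.7 ms
```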
23.
Somoye, Idris Olansile.
GPU accelerated adaptive compressed sensing.
Degree: MS, Electrical and Computer Engineering, 2016, Georgia Tech
URL: http://hdl.handle.net/1853/56379
▼ There are presently image sensors based on compressed sensing that apply the theory to video acquisition; however, these imagers require specialized hardware modules that are not widely available and are therefore not currently practical for video sensing. To deliver a practical image sensor that applies compressive sensing, I propose an imaging system based on a GPU and an off-the-shelf conventional image sensor, which takes advantage of parallel computation to efficiently transform data into the compressed sensing domain. By exploiting GPU processing along with straightforward communication between the host and the GPU, the imaging system easily accommodates algorithms that rapidly change the sensing basis, making compressed sensing more applicable despite the general lack of dedicated hardware. Measurement results show that the GPU-based compressive sensing imaging system is a viable and practical imager that can quickly compress images, providing a real-time video encoder for low-power systems.
Advisors/Committee Members: Chatterjee, Abhijit (advisor), Raychowdhury, Arijit (committee member), Romberg, Justin (committee member).
Subjects/Keywords: GPU; Compressed sensing; GPGPU; Predictive video encoding
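Here is a CPU-side NumPy sketch of the measurement step the abstract describes: compress each image block with a random sensing matrix via one large matrix multiply, the embarrassingly parallel operation that the thesis offloads to a GPU. Block size, measurement count, and the Bernoulli basis are assumptions for illustration; sparse recovery of the frames is a separate decoding step not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
block = 16                               # sensing is applied per 16x16 image block
n = block * block                        # 256 pixels per block
m = 64                                   # compressive measurements per block (4x reduction)

# Random Bernoulli (+/-1) sensing matrix; regenerating it per frame is cheap,
# which is what makes a rapidly changing sensing basis practical.
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

frame = rng.integers(0, 256, size=(256, 256)).astype(np.float64)   # stand-in image

# Compress every block: y = Phi @ x. Reshape the frame into (num_blocks, n)
# so all blocks are measured with a single matrix multiply.
blocks = (frame.reshape(256 // block, block, 256 // block, block)
               .transpose(0, 2, 1, 3).reshape(-1, n))
measurements = blocks @ Phi.T            # shape: (num_blocks, m)
print(blocks.shape, "->", measurements.shape)
```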
24.
Wong, Lok S.
Optimal partitions for the fast multipole method.
Degree: MS, Electrical and Computer Engineering, 2016, Georgia Tech
URL: http://hdl.handle.net/1853/56360
▼ The fast multipole method (FMM) is an algorithm first developed to approximately solve the N-body problem in linear time. Part of the FMM involves recursively partitioning a region of source points into cells. Insight from studying lattices and covering problems leads to new, more efficient partitions for the FMM. The new partitions are designed to reduce near-field and far-field calculations. Results from simulations show significant reductions in computation time with little to no additional error in many cases.
Advisors/Committee Members: Barnes, Christopher (advisor), Romberg, Justin (committee member), Lanterman, Aaron (committee member).
Subjects/Keywords: Fast multipole method; Partition
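For context, the sketch below shows the standard recursive square-cell (quadtree) partition that 2-D FMM implementations typically start from; the thesis's new partitions are alternatives to this baseline and are not reproduced here.

```python
import numpy as np

def quadtree_cells(points, x0, y0, size, max_pts=8, depth=0, max_depth=6):
    """Recursively partition a square region into cells with at most `max_pts`
    points each, the standard baseline partition for a 2-D FMM."""
    inside = points[(points[:, 0] >= x0) & (points[:, 0] < x0 + size) &
                    (points[:, 1] >= y0) & (points[:, 1] < y0 + size)]
    if len(inside) <= max_pts or depth >= max_depth:
        return [(x0, y0, size, len(inside))]          # leaf cell
    half = size / 2.0
    cells = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            cells += quadtree_cells(inside, x0 + dx, y0 + dy, half,
                                    max_pts, depth + 1, max_depth)
    return cells

points = np.random.default_rng(0).random((1000, 2))   # sources in the unit square
cells = quadtree_cells(points, 0.0, 0.0, 1.0)
print(len(cells), "cells; largest holds", max(c[3] for c in cells), "points")
```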
25.
Kagie, Matthew Joseph.
Time-of-arrival estimation for saturated optical transients using censored probabilistic models.
Degree: MS, Electrical and Computer Engineering, 2016, Georgia Tech
URL: http://hdl.handle.net/1853/56365
▼ The objective of the proposed research is to estimate the time-of-arrival of a transient optical signal subjected to a particular type of nonlinear distortion. The limited dynamic range of optical sensors can result in nonlinear distortion when measuring extreme transient events, such as lightning. To deal with saturated signals, we employ censored probabilistic models to develop maximum-likelihood procedures for estimating the time-of-arrival of lightning strikes, along with associated nuisance parameters. The received signal is modeled as a realization of a Poisson point process characterized by parametric models of a lightning strike's time-varying intensity. The models are extracted from the FORTÉ lightning database via machine learning techniques. Using Monte Carlo simulations, we compare the variances of different algorithms as a function of signal magnitude and saturation threshold. We also compare these variances to analytical performance bounds such as the Cramér-Rao lower bound.
Advisors/Committee Members: Lanterman, Aaron (advisor), Romberg, Justin (committee member), Citrin, David (committee member).
Subjects/Keywords: Expectation maximization algorithms; Poisson models; Cramér-Rao bounds; Lightning; Machine learning
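A minimal sketch of the censored-likelihood idea: bins where the sensor clips contribute only the tail probability P(N ≥ clip level) to a Poisson likelihood, and the time of arrival is found by maximizing that likelihood. The intensity model, fixed nuisance parameters, and grid search below are illustrative simplifications of the thesis's procedure.

```python
import numpy as np
from scipy.stats import poisson

def neg_log_lik(tau, t, counts, saturated, amplitude, decay, background):
    """Censored Poisson negative log-likelihood of an arrival time `tau`.

    Bins flagged `saturated` only tell us the count reached the clip level,
    so they contribute the tail probability P(N >= count) instead of the pmf.
    """
    rate = background + amplitude * np.exp(-np.clip(t - tau, 0, None) / decay) * (t >= tau)
    ll = np.where(saturated,
                  np.log(poisson.sf(counts - 1, rate) + 1e-300),   # P(N >= clip level)
                  poisson.logpmf(counts, rate))
    return -np.sum(ll)

# Simulate a saturating sensor: true arrival at t = 2.0 ms, counts clipped at 25.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.05)                       # ms, 50 us bins
true_rate = 1.0 + 40.0 * np.exp(-np.clip(t - 2.0, 0, None) / 0.5) * (t >= 2.0)
raw = rng.poisson(true_rate)
clip = 25
counts = np.minimum(raw, clip)
saturated = raw >= clip

# Maximum-likelihood time of arrival by a grid search over candidate tau
# (the nuisance parameters are held at their true values here for simplicity).
taus = np.arange(0.5, 5.0, 0.01)
nll = [neg_log_lik(tau, t, counts, saturated, 40.0, 0.5, 1.0) for tau in taus]
print("estimated TOA:", taus[int(np.argmin(nll))], "ms")
```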
26.
Tan, Edward S.
Hyper-wideband OFDM system.
Degree: MS, Electrical and Computer Engineering, 2016, Georgia Tech
URL: http://hdl.handle.net/1853/55056
▼ Hyper-wideband communications represent the next frontier in spread-spectrum RF systems, with in excess of 10 GHz of instantaneous bandwidth. In this thesis, an end-to-end physical-layer link is implemented featuring 16k-OFDM with a 4 GHz-wide channel centered at 9 GHz. No a priori channel state information is assumed; channel information is derived from the preamble and comb pilot structure. Because of the channel's unusually wide spectral extent, the channel estimator combines least squares channel estimates with a robust support vector statistical learning approach whose parameters are selected autonomously. The system's performance is demonstrated through indoor wireless experiments, including line-of-sight and near-line-of-sight links. Moreover, it is shown that the support vector approach outperforms linear and cubic-spline inter/extrapolation of the least squares channel estimates.
Advisors/Committee Members: Ralph, Stephen E. (advisor), Barry, John R. (advisor), Romberg, Justin K. (advisor).
Subjects/Keywords: OFDM; Support vector regression; Hyper-wideband
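A small sketch of the estimation idea: form least squares channel estimates at the comb pilots, then inter/extrapolate across all subcarriers with support vector regression (scikit-learn) and compare against a cubic-spline baseline. The channel model, pilot spacing, and SVR hyperparameters are illustrative toy choices, not those tuned in the thesis.

```python
import numpy as np
from sklearn.svm import SVR
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
n_sc, pilot_step = 1024, 16                     # subcarriers, comb-pilot spacing
k = np.arange(n_sc)

# A frequency-selective channel: sum of a few delayed paths.
delays, gains = [3, 7, 12], [1.0, 0.5 + 0.4j, 0.25j]
H = sum(g * np.exp(-2j * np.pi * k * d / n_sc) for g, d in zip(gains, delays))

# "Least squares" estimates at the pilots = true channel plus noise.
pilots = k[::pilot_step]
H_ls = H[pilots] + 0.05 * (rng.standard_normal(pilots.size)
                           + 1j * rng.standard_normal(pilots.size))

X = pilots.reshape(-1, 1) / n_sc                # pilot locations (normalized frequency)
Xa = k.reshape(-1, 1) / n_sc                    # all subcarriers

def fit_svr(y):
    """SVR on the pilot estimates, evaluated on every subcarrier (hand-tuned toy parameters)."""
    return SVR(kernel="rbf", C=10.0, gamma=500.0, epsilon=0.01).fit(X, y).predict(Xa)

def fit_spline(y):
    return CubicSpline(pilots, y)(k)

# Real and imaginary parts are fitted separately.
H_svr = fit_svr(H_ls.real) + 1j * fit_svr(H_ls.imag)
H_spl = fit_spline(H_ls.real) + 1j * fit_spline(H_ls.imag)

for name, Hhat in [("SVR", H_svr), ("spline", H_spl)]:
    print(name, np.mean(np.abs(Hhat - H) ** 2))   # reconstruction error of each approach
```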

Georgia Tech
27.
Bales, Michael Ryan.
Illumination compensation in video surveillance analysis.
Degree: PhD, Electrical and Computer Engineering, 2011, Georgia Tech
URL: http://hdl.handle.net/1853/39535
▼ Problems in automated video surveillance analysis caused by illumination changes are explored, and solutions are presented. Controlled experiments are first conducted to measure the responses of color targets to changes in lighting intensity and spectrum. Surfaces of dissimilar color are found to respond significantly differently. Illumination compensation model error is reduced by 70% to 80% by individually optimizing model parameters for each distinct color region, while applying a model tuned for one region to a chromatically different region increases error by a factor of 15. A background model, called BigBackground, is presented to extract large, stable, chromatically self-similar background features by identifying the dominant colors in a scene. The stability and chromatic diversity of these features make them useful reference points for quantifying illumination changes. The model is observed to cover as much as 90% of a scene, and pixels belonging to the model are on average 20% more stable than non-member pixels. Several illumination compensation techniques are developed to exploit BigBackground and are compared with several compensation techniques from the literature, both in terms of foreground/background classification and when applied to an object tracking pipeline with kinematic and appearance-based correspondence mechanisms. Compared with the other techniques, the BigBackground-based techniques improve foreground classification by 25% to 43%, improve tracking accuracy by an average of 20%, and better preserve object appearance for appearance-based trackers. All algorithms are implemented in C or C++ so that runtime performance can be considered. In terms of execution speed, the BigBackground-based illumination compensation technique runs on par with the simplest comparison technique and consistently achieves twice the frame rate of the two next-fastest techniques.
Advisors/Committee Members: Wills, Scott (Committee Chair), Wills, Linda (Committee Co-Chair), Bader, David (Committee Member), Howard, Ayanna (Committee Member), Kim, Jongman (Committee Member), Romberg, Justin (Committee Member).
Subjects/Keywords: Tracking; Color; Computer vision; Background model; BigBackground; Illumination change; Video surveillance; Lighting; Video recording -- Lighting
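As a loose illustration of the dominant-color idea (not the BigBackground algorithm itself, whose details the abstract does not give), the sketch below clusters pixel colors and flags pixels belonging to the largest color clusters; names and thresholds are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_color_mask(frame, n_colors=8, min_share=0.05):
    """Flag pixels whose color belongs to one of the scene's dominant colors.

    Clusters the RGB values, keeps clusters holding at least `min_share` of the
    pixels, and returns a boolean mask of pixels in those clusters, a rough
    stand-in for a dominant-color background model.
    """
    h, w, _ = frame.shape
    pixels = frame.reshape(-1, 3).astype(np.float64)
    labels = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit_predict(pixels)
    counts = np.bincount(labels, minlength=n_colors)
    dominant = counts / counts.sum() >= min_share
    return dominant[labels].reshape(h, w)

# Synthetic frame: two big flat color regions plus random clutter.
rng = np.random.default_rng(0)
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[:60] = (40, 90, 160)                                  # "sky"
frame[60:] = (50, 120, 60)                                  # "grass"
frame[40:80, 70:100] = rng.integers(0, 255, (40, 30, 3))    # clutter / foreground
mask = dominant_color_mask(frame)
print(mask.mean())           # fraction of pixels covered by dominant-color regions
```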

Georgia Tech
28.
Chang, Muya.
Hardware Dynamical System for Solving Optimization Problems.
Degree: PhD, Electrical and Computer Engineering, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/64104
▼ Optimization problems form the basis of a wide gamut of computationally challenging tasks in signal processing, machine learning, resource planning, and so on. Of these, convex optimization, and in particular least-squares optimization, covers a vast majority, and recent advances in iterative algorithms for solving such problems in large dimensions have gained traction. Multi-core designs with systolic or semi-systolic architectures can be a key enabler for implementing discrete dynamical systems and realizing massively scalable architectures that run such optimization algorithms.
In the first part of the thesis, we propose a platform architecture implemented in programmable FPGA hardware to solve a template problem in distributed optimization, namely signal reconstruction from non-uniform sampling. This is a quintessential problem with widespread applications in signal processing, computational imaging, and related areas. We expect such an architectural exploration to open up promising opportunities for solving the distributed optimizations that are becoming increasingly important in real-world applications. The complete system design, its mapping and optimization onto an FPGA architecture, and an analysis of convergence and scalability are presented. The limitations of the FPGA motivated the move to an ASIC design, described next.
In the second part of the thesis, we present OPTIMO, a 65 nm, 16-b, fully programmable spatial-array processor with 49 cores and a hierarchical multi-cast network for solving distributed optimizations via the alternating direction method of multipliers (ADMM). ADMM is a projection-based method for solving generic constrained optimization problems. In essence, it decomposes the decision vector into subvectors, updates them sequentially by minimizing an augmented Lagrangian function, and then updates the Lagrange multiplier. The ADMM algorithm has typically been used for problems in which the decision variable is decomposed into two or more subvectors. We demonstrate six template algorithms and their applications, and we measure a peak energy efficiency of 279 GOPS/W.
In the last part, we turn to another side of optimization, combinatorial optimization, and present AC-SAT, an analog circuit in conventional CMOS technology for solving a representative NP-complete problem, Boolean satisfiability (SAT). AC-SAT is based on a deterministic continuous-time dynamical system (CTDS) and finds SAT solutions in analog polynomial time, at the expense of auxiliary variables that can grow exponentially when needed. The overall design is programmable and modular, and it has been validated through multiple stages, from high-level simulation on general-purpose CPUs, through low-level simulation with SPICE (Simulation Program with Integrated Circuit Emphasis), to measurements of the fabricated chip. The system is capable of solving problems with up to 50 variables and 212 clauses. Through the measurement results, we demonstrate the relationship…
Advisors/Committee Members: Raychowdhury, Arijit (advisor), Romberg, Justin (committee member), Krishna, Tushar (committee member), Bakir, Muhannad S (committee member), Sen, Shreyas (committee member), Bowman, Keith (committee member).
Subjects/Keywords: Optimization; Signal processing algorithms; Distributed databases; Computational modeling; Training; Convex functions; Hardware
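Since the abstract summarizes the ADMM update structure, here is a minimal software sketch of those updates applied to a canonical problem (the lasso): split the variable, alternate an augmented-Lagrangian minimization with a proximal step, then update the multiplier. This is a generic illustration, not one of the six template algorithms mapped onto OPTIMO.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for the lasso: min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Split x into (x, z) with the constraint x = z; alternate an augmented-Lagrangian
    minimization in x, a soft-threshold (prox of the L1 term) in z, and a dual update.
    """
    m, n = A.shape
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse every iteration
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)    # z-update
        u = u + x - z                                                      # multiplier update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40))
x_true = np.zeros(40); x_true[[3, 17, 29]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.05 * rng.standard_normal(100)
print(np.round(admm_lasso(A, b, lam=0.5), 2)[:10])   # sparse estimate, first 10 entries
```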
29.
Remenyi, Norbert.
Contributions to Bayesian wavelet shrinkage.
Degree: PhD, Industrial and Systems Engineering, 2012, Georgia Tech
URL: http://hdl.handle.net/1853/45898
▼ This thesis provides contributions to research on Bayesian modeling and shrinkage in the wavelet domain. Wavelets are a powerful tool for describing phenomena that change rapidly in time, and wavelet-based modeling has become a standard technique in many areas of statistics and, more broadly, in science and engineering. Bayesian modeling and estimation in the wavelet domain have found useful applications in nonparametric regression, image denoising, and many other areas. In this thesis, we build on the existing techniques and propose new methods for applications in nonparametric regression, image denoising, and partially linear models.
The thesis consists of an overview chapter and four main topics. In Chapter 1, we provide an overview of recent developments and the current status of Bayesian wavelet shrinkage research. The chapter contains an extensive literature review of almost 100 references. The main focus of the overview chapter is nonparametric regression, where the observations come from an unknown function contaminated with Gaussian noise. We present many methods that employ model-based and adaptive shrinkage of the wavelet coefficients through Bayes rules. These include new developments such as dependence models, complex wavelets, and Markov chain Monte Carlo (MCMC) strategies. Some applications of Bayesian wavelet shrinkage, such as curve classification, are discussed.
In Chapter 2, we propose the Gibbs Sampling Wavelet Smoother (GSWS), an adaptive wavelet denoising methodology. We use the traditional mixture prior on the wavelet coefficients, but also formulate a fully Bayesian hierarchical model in the wavelet domain that accounts for the uncertainty of the prior parameters by placing hyperpriors on them. Since a closed-form solution for the Bayes estimator does not exist, the procedure is computational: the posterior mean is computed via MCMC simulations. We show how to develop an efficient Gibbs sampling algorithm for the proposed model. The resulting procedure is fully Bayesian, adapts to the underlying signal, and provides good denoising performance compared to state-of-the-art methods. Application of the method is illustrated on a real data set arising from the analysis of metabolic pathways, where an iterative shrinkage procedure is developed to preserve the mass balance of the metabolites in the system. We also show how the methodology can be extended to complex wavelet bases.
In Chapter 3, we propose a wavelet-based denoising methodology based on a Bayesian hierarchical model with a double Weibull prior. The interesting feature is that, in contrast to the mixture priors traditionally used by some state-of-the-art methods, the wavelet coefficients are modeled by a single density. Two estimators are developed, one based on the posterior mean and the other on the larger posterior mode, and we show how to calculate these estimators efficiently. The methodology provides good denoising performance, comparable even to state-of-the-art methods that use a mixture…
Advisors/Committee Members: Vidakovic, Brani (Committee Chair), Mei, Yajun (Committee Member), Goldsman, David (Committee Member), Huo, Xiaoming (Committee Member), Romberg, Justin (Committee Member).
Subjects/Keywords: Bayes factor; Bayesian estimation; Bayesian inference; Wavelets (Mathematics); Bayesian statistical decision theory; Mathematical statistics
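For orientation, here is a minimal sketch of the generic wavelet-shrinkage pipeline the chapters build on: transform, shrink the detail coefficients, invert. It uses simple soft thresholding with the universal threshold via PyWavelets, not the GSWS Gibbs sampler or the double-Weibull estimators developed in the thesis.

```python
import numpy as np
import pywt   # PyWavelets

rng = np.random.default_rng(0)
n = 1024
t = np.linspace(0, 1, n)
signal = np.piecewise(t, [t < 0.3, (t >= 0.3) & (t < 0.7), t >= 0.7], [0.0, 1.0, -0.5])
noisy = signal + 0.1 * rng.standard_normal(n)

# Wavelet shrinkage pipeline: transform, shrink detail coefficients, transform back.
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise level from the finest scale
thresh = sigma * np.sqrt(2 * np.log(n))                  # universal threshold
shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(shrunk, "db4")[:n]

print(np.mean((noisy - signal) ** 2), np.mean((denoised - signal) ** 2))  # MSE before / after
```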
30.
Sconyers, Christopher.
Particle filter-based architecture for video target tracking and geo-location using multiple UAVs.
Degree: PhD, Electrical and Computer Engineering, 2013, Georgia Tech
URL: http://hdl.handle.net/1853/47734
▼ Research in the areas of target detection, tracking, and geo-location is essential for enabling an unmanned aerial vehicle (UAV) platform to autonomously execute a mission or task without the need for a pilot or operator. Small-class UAVs and video camera sensors, complemented with "soft sensors" realized only in software as a combination of a priori knowledge and sensor measurements, are called upon to replace the cumbersome precision sensors on board a large-class UAV. The objective of this research is to develop a geo-location solution for use on board multiple UAVs, using only mounted video camera sensors, to accurately geo-locate and track a target. This research introduces an estimation solution that combines the power of the particle filter with the utility of the video sensor as a general solution for passive target geo-location on board multiple UAVs. The particle filter is exploited for its ability to use all of the available information about the system model, system uncertainty, and sensor uncertainty to approximate the statistical likelihood of the target state. The geo-location particle filter is tested online and in real time in a simulation environment involving multiple UAVs with video cameras and a maneuvering ground vehicle as the target. Simulation results show that the geo-location particle filter estimates the target location with high accuracy, that adding UAVs or particles to the system improves the location estimation accuracy with minimal additional processing time, and that UAV control and trajectory generation algorithms restrict each UAV to a desired range to minimize error.
Advisors/Committee Members: Vachtsevanos, George (Committee Chair), Johnson, Eric (Committee Member), Michaels, Thomas (Committee Member), Romberg, Justin (Committee Member), Yezzi, Anthony (Committee Member).
Subjects/Keywords: Rotorcraft; Localization; Monte Carlo; Particle filter; Multi-modal; State space; Observer; Gimbal; Markov; Drone aircraft; Target acquisition
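A minimal sketch of the estimation machinery described above: a bootstrap particle filter that geo-locates a moving ground target from noisy bearing (line-of-sight) measurements taken by two observers. The motion model, noise levels, and observer geometry are illustrative stand-ins for the thesis's multi-UAV video setup.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, n_particles = 1.0, 40, 2000
obs_pos = np.array([[0.0, 0.0], [800.0, 0.0]])    # two fixed observers (stand-ins for UAVs)
sigma_bearing = np.deg2rad(1.5)                   # bearing measurement noise (rad)

def bearings(xy):
    """Bearing from each observer to the point(s) in xy (last axis = [x, y])."""
    d = xy[..., None, :] - obs_pos
    return np.arctan2(d[..., 1], d[..., 0])

# True target: near-constant velocity, state = [px, py, vx, vy].
x_true = np.array([200.0, 500.0, 6.0, -2.0])

# Bootstrap particle filter: propagate with the motion model, weight each particle
# by the bearing likelihood, resample, and read off the posterior mean.
particles = np.column_stack([rng.uniform(0, 1000, n_particles),
                             rng.uniform(0, 1000, n_particles),
                             rng.normal(0, 5, (n_particles, 2))])
for _ in range(n_steps):
    x_true[:2] += dt * x_true[2:]
    z = bearings(x_true[:2]) + sigma_bearing * rng.standard_normal(2)

    particles[:, :2] += dt * particles[:, 2:]                         # predict positions
    particles[:, 2:] += 0.5 * rng.standard_normal((n_particles, 2))   # velocity process noise
    err = bearings(particles[:, :2]) - z
    err = (err + np.pi) % (2 * np.pi) - np.pi                         # wrap angle differences
    w = np.exp(-0.5 * np.sum((err / sigma_bearing) ** 2, axis=1))
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample

print("estimate:", particles[:, :2].mean(axis=0), " truth:", x_true[:2])
```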