You searched for +publisher:"University of Colorado" +contributor:("Christian Ketelsen").
Showing records 1 – 5 of 5 total matches.
No search limiters apply to these results.

University of Colorado
1.
Mendoza, Cristian Rafael.
Rays, Waves, and Separatrices.
Degree: MS, Applied Mathematics, 2015, University of Colorado
URL: https://scholar.colorado.edu/appm_gradetds/77
A qualitative study on the ray and wave dynamics of light in optical waveguides with separatrix geometry is presented herein. The thesis attempts to answer the question of what happens in slab waveguides with a transverse refractive index distribution similar to the effective index distribution of a dual tapering waveguide. Discontinuous perturbations along the optical axis to the slab waveguide are also studied. A low-order method is used in this study. Light is found to be guided due to an interaction of the input signal with dynamical equilibria within the waveguide geometry. Dynamical systems and quantum separatrix crossing theory are utilized to explain paraxial propagation paths and modal power spectra of the segmented waveguide. Light confinement in the guide is reliant on the large number of degenerate higher-order modes present. Alternate solution methods are also discussed.
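For a concrete feel for the ray picture described in this abstract, the sketch below (not from the thesis) integrates the paraxial ray equations dx/dz = θ, dθ/dz = (1/n0)·dn/dx in a slab with an illustrative dual-core index profile; the profile and all parameters are invented for demonstration only.

```python
import numpy as np

def trace_ray(x0, theta0, dn_dx, n0=1.5, z_max=2e-3, dz=1e-7):
    """Integrate the paraxial ray equations dx/dz = theta, dtheta/dz = (1/n0) dn/dx."""
    x, theta = x0, theta0
    xs = []
    for _ in range(int(z_max / dz)):
        theta += dz * dn_dx(x) / n0   # transverse index gradient bends the ray
        x += dz * theta
        xs.append(x)
    return np.array(xs)

# Illustrative dual-core profile: two Gaussian index bumps of height dn and width w,
# centered at +/- sep/2 on a uniform background (made-up numbers, not thesis values).
dn, w, sep = 5e-3, 2e-6, 6e-6
dn_dx = lambda x: dn * (-2 * (x - sep / 2) / w**2 * np.exp(-((x - sep / 2) / w)**2)
                        - 2 * (x + sep / 2) / w**2 * np.exp(-((x + sep / 2) / w)**2))

# With these parameters, a ray launched inside one core with a small angle oscillates
# within that core, while a ray launched on the ridge between the cores with a modest
# angle crosses back and forth between them - the kind of near-separatrix trajectory
# the thesis analyzes with dynamical-systems tools.
in_core = trace_ray(sep / 2, 0.02, dn_dx)
between = trace_ray(0.0, 0.03, dn_dx)
print(in_core.min(), in_core.max(), between.min(), between.max())
```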
Advisors/Committee Members: Alan R. Mickelson, Gregory Beylkin, Mark Hoefer, Christian Ketelsen.
Subjects/Keywords: Electrical Engineering; Periodically Segmented Waveguides; Ray Analysis; Separatrix; Wave Analysis; Applied Mathematics; Optics; Physics
APA (6th Edition):
Mendoza, C. R. (2015). Rays, Waves, and Separatrices. (Masters Thesis). University of Colorado. Retrieved from https://scholar.colorado.edu/appm_gradetds/77
Chicago Manual of Style (16th Edition):
Mendoza, Cristian Rafael. “Rays, Waves, and Separatrices.” 2015. Masters Thesis, University of Colorado. Accessed March 05, 2021. https://scholar.colorado.edu/appm_gradetds/77.
MLA Handbook (7th Edition):
Mendoza, Cristian Rafael. “Rays, Waves, and Separatrices.” 2015. Web. 05 Mar 2021.
Vancouver:
Mendoza CR. Rays, Waves, and Separatrices. [Internet] [Masters thesis]. University of Colorado; 2015. [cited 2021 Mar 05]. Available from: https://scholar.colorado.edu/appm_gradetds/77.
Council of Science Editors:
Mendoza CR. Rays, Waves, and Separatrices. [Masters Thesis]. University of Colorado; 2015. Available from: https://scholar.colorado.edu/appm_gradetds/77

University of Colorado
2.
Kannan, Karthik.
The Big Picture: Loss Functions at the Dataset Level.
Degree: MS, Computer Science, 2017, University of Colorado
URL: https://scholar.colorado.edu/csci_gradetds/146
Loss functions play a key role in machine learning optimization problems. Even with their widespread use throughout the field, selecting a loss function tailored to a specific problem is more art than science. Literature on the properties of loss functions that might help a practitioner make an informed choice is sparse.
In this thesis, we motivate research on the behavior of loss functions at the level of the dataset as a whole. We begin with a simple experiment that illustrates the differences between these loss functions. We then move on to a well-known attribute of perhaps the most ubiquitous loss function, the squared error. We then characterize all loss functions that exhibit this property. Finally, we end with extensions and possible directions of research in this field.
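Given the keywords below (Bregman divergences, property elicitation), the squared-error attribute referred to above is presumably the fact that its dataset-level minimizer is the sample mean, a property shared by every Bregman divergence. The snippet below is a small numerical illustration of that reading, not code from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=10_000)   # any skewed dataset will do

def dataset_loss(loss, predictions, data):
    """Average loss of a single constant prediction over the whole dataset."""
    return np.array([loss(p, data).mean() for p in predictions])

squared  = lambda p, y: (p - y) ** 2
absolute = lambda p, y: np.abs(p - y)
# A Bregman divergence generated by phi(t) = t*log(t) (generalized KL, t > 0):
# D_phi(y, p) = y*log(y/p) - y + p.
kl_breg  = lambda p, y: y * np.log(y / p) - y + p

grid = np.linspace(0.05, 6.0, 2001)
for name, loss in [("squared error", squared), ("absolute error", absolute),
                   ("KL-type Bregman", kl_breg)]:
    best = grid[dataset_loss(loss, grid, data).argmin()]
    print(f"{name:16s} minimizer ~ {best:.3f}")

print("sample mean   =", data.mean())      # matched by squared error and the Bregman loss
print("sample median =", np.median(data))  # matched by absolute error instead
```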
Advisors/Committee Members: Rafael M. Frongillo, Christian Ketelsen, Stephen Becker.
Subjects/Keywords: Bregman Divergences; Loss Functions; Machine Learning; Property Elicitation; Computer Sciences
APA (6th Edition):
Kannan, K. (2017). The Big Picture: Loss Functions at the Dataset Level. (Masters Thesis). University of Colorado. Retrieved from https://scholar.colorado.edu/csci_gradetds/146
Chicago Manual of Style (16th Edition):
Kannan, Karthik. “The Big Picture: Loss Functions at the Dataset Level.” 2017. Masters Thesis, University of Colorado. Accessed March 05, 2021. https://scholar.colorado.edu/csci_gradetds/146.
MLA Handbook (7th Edition):
Kannan, Karthik. “The Big Picture: Loss Functions at the Dataset Level.” 2017. Web. 05 Mar 2021.
Vancouver:
Kannan K. The Big Picture: Loss Functions at the Dataset Level. [Internet] [Masters thesis]. University of Colorado; 2017. [cited 2021 Mar 05]. Available from: https://scholar.colorado.edu/csci_gradetds/146.
Council of Science Editors:
Kannan K. The Big Picture: Loss Functions at the Dataset Level. [Masters Thesis]. University of Colorado; 2017. Available from: https://scholar.colorado.edu/csci_gradetds/146

University of Colorado
3.
Folberth, James.
Fast and Reliable Methods in Numerical Linear Algebra, Signal Processing, and Image Processing.
Degree: PhD, 2018, University of Colorado
URL: https://scholar.colorado.edu/appm_gradetds/134
In this dissertation we consider numerical methods for a problem in each of numerical linear algebra, digital signal processing, and image processing for super-resolution fluorescence microscopy. We consider first a fast, randomized mixing operation applied to the unpivoted Householder QR factorization. The method is an adaptation of a slower randomized operation that is known to provide a rank-revealing factorization with high probability. We perform a number of experiments to highlight possible uses of our method and give evidence that our algorithm likely also provides a rank-revealing factorization with high probability. In the next chapter we develop fast algorithms for computing the discrete, narrowband cross-ambiguity function (CAF) on a downsampled grid of delay values for the purpose of quickly detecting the location of peaks in the CAF surface. Due to the likelihood of missing a narrow peak on a downsampled grid of delay values, we propose methods to make our algorithms robust against missing peaks. To identify peak locations to high accuracy, we propose a two-step approach: first identify a coarse peak location using one of our delay-decimated CAF algorithms, then compute the CAF on a fine, but very small, grid around the peak to find its precise location. Runtime experiments with our C++ implementations show that our delay-decimated algorithms can give more than an order of magnitude improvement in overall runtime to detect peaks in the CAF surface when compared against standard CAF algorithms. In the final chapter we study non-negative least-squares (NNLS) problems arising from a new technique in super-resolution fluorescence microscopy. The image formation task involves solving many tens of thousands of NNLS problems, each using the same matrix, but different right-hand sides. We take advantage of this special structure by adapting an optimal first-order method to efficiently solve many NNLS problems simultaneously. Our NNLS problems are extremely ill-conditioned, so we also experiment with using a block-diagonal preconditioner and the alternating direction method of multipliers (ADMM) to improve convergence speed. We also develop a safe feature elimination strategy for general NNLS problems. It eliminates features only when they are guaranteed to have weight zero at an optimal point. Our strategy is inspired by recent works in the literature for ℓ1-regularized least-squares, but a notable exception is that we develop our method to use an inexact, but feasible, primal-dual point pair. This allows us to use feature elimination reliably on the extremely ill-conditioned NNLS problems from our microscopy application. For an example image reconstruction, we use our feature elimination strategy to certify that the reconstructed super-resolved image is unique.
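As an illustration of the structure exploited above (tens of thousands of NNLS problems sharing one matrix), here is a minimal sketch of an accelerated projected-gradient solver applied to all right-hand sides at once; it is a toy implementation with invented dimensions, not the dissertation's code, and it omits the preconditioning, ADMM, and feature-elimination machinery discussed in the abstract.

```python
import numpy as np

def nnls_batch(A, B, iters=500):
    """Solve min_{X >= 0} 0.5 * ||A X - B||_F^2 for many right-hand sides at once
    using FISTA-style accelerated projected gradient (a sketch, not thesis code)."""
    AtA, AtB = A.T @ A, A.T @ B
    L = np.linalg.eigvalsh(AtA).max()          # Lipschitz constant of the gradient
    X = Z = np.zeros_like(AtB)
    t = 1.0
    for _ in range(iters):
        X_new = np.maximum(Z - (AtA @ Z - AtB) / L, 0.0)   # gradient step + projection
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        Z = X_new + ((t - 1.0) / t_new) * (X_new - X)      # momentum on the iterates
        X, t = X_new, t_new
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
B = rng.standard_normal((200, 10_000))         # 10,000 right-hand sides, one matrix A
X = nnls_batch(A, B)
print(X.shape, bool((X >= 0).all()))
```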
Advisors/Committee Members: Stephen Becker, Jed Brown, Ian Grooms, Christian Ketelsen, Per-Gunnar Martinsson.
Subjects/Keywords: cross-ambiguity function; duality; non-negative least-squares; qr factorization; image processing; Applied Mathematics; Applied Statistics
APA (6th Edition):
Folberth, J. (2018). Fast and Reliable Methods in Numerical Linear Algebra, Signal Processing, and Image Processing. (Doctoral Dissertation). University of Colorado. Retrieved from https://scholar.colorado.edu/appm_gradetds/134
Chicago Manual of Style (16th Edition):
Folberth, James. “Fast and Reliable Methods in Numerical Linear Algebra, Signal Processing, and Image Processing.” 2018. Doctoral Dissertation, University of Colorado. Accessed March 05, 2021. https://scholar.colorado.edu/appm_gradetds/134.
MLA Handbook (7th Edition):
Folberth, James. “Fast and Reliable Methods in Numerical Linear Algebra, Signal Processing, and Image Processing.” 2018. Web. 05 Mar 2021.
Vancouver:
Folberth J. Fast and Reliable Methods in Numerical Linear Algebra, Signal Processing, and Image Processing. [Internet] [Doctoral dissertation]. University of Colorado; 2018. [cited 2021 Mar 05]. Available from: https://scholar.colorado.edu/appm_gradetds/134.
Council of Science Editors:
Folberth J. Fast and Reliable Methods in Numerical Linear Algebra, Signal Processing, and Image Processing. [Doctoral Dissertation]. University of Colorado; 2018. Available from: https://scholar.colorado.edu/appm_gradetds/134

University of Colorado
4.
Fox, Alyson Lindsey.
Algebraic Multigrid (AMG) for Graph Laplacian Linear Systems: Extensions of AMG for Signed, Undirected and Unsigned, Directed Graphs.
Degree: PhD, Applied Mathematics, 2017, University of Colorado
URL: https://scholar.colorado.edu/appm_gradetds/96
Relational datasets are often modeled as an unsigned, undirected graph due to the nice properties of the resulting graph Laplacian, but information is lost if certain attributes of the graph are not represented. This thesis presents two generalizations of Algebraic Multigrid (AMG) solvers for graph Laplacian systems on different graph types: applying Gremban’s expansion to extend unsigned graph Laplacian solvers to signed graph Laplacian systems, and generalizing techniques in Lean Algebraic Multigrid (LAMG) to a new multigrid solver for unsigned, directed graph Laplacian systems. Signed graphs extend the traditional notion of connections and disconnections to include both favorable and adverse relationships, such as friend-enemy social networks or social networks with “likes” and “dislikes.” Gremban’s expansion is used to transform the signed graph Laplacian into an unsigned graph Laplacian with twice the number of unknowns. By using Gremban’s expansion, we extend current unsigned graph Laplacian solvers to signed graph Laplacians. This thesis analyzes the numerical stability and applicability of Gremban’s expansion and proves that the error of the solution of the original linear system can be tightly bounded by the error of the expanded system. In directed graphs, some subset of the relationships is not reciprocal, as in hyperlink graphs, biological neural networks, and electrical power grids. A new algebraic multigrid algorithm, Nonsymmetric Lean Algebraic Multigrid (NS-LAMG), is proposed, which uses ideas from Lean Algebraic Multigrid, nonsymmetric Smoothed Aggregation, and multigrid solvers for Markov chain stationary distribution systems. Low-degree elimination, introduced in Lean Algebraic Multigrid for undirected graphs, is redefined for directed graphs. A semi-adaptive multigrid solver, inspired by low-degree elimination, is instrumented in the setup phase and can be adapted for Markov chain stationary distribution systems. Numerical results show that NS-LAMG outperforms GMRES(k) for real-world, directed graph Laplacian linear systems. Both generalizations enable more choices in modeling decisions for graph Laplacian systems. Owing to the success of NS-LAMG and various other nonsymmetric AMG (NS-AMG) solvers, a further study of theoretical convergence properties is presented in this thesis. In particular, a necessary condition, known as the “weak approximation property,” and a sufficient one, referred to as the “strong approximation property,” as well as the “super strong approximation property,” are generalized to nonsymmetric matrices, and the relationships between these approximation properties are proved for the nonsymmetric case. In NS-AMG, if P ≠ R, the two-grid error propagation operator for the coarse-grid correction is an oblique projection with respect to any reasonable norm, which can cause the error to increase. A main focal point of this work is a discussion of the conditions under which the error propagation operator is bounded, as the stability of the error…
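To make the Gremban’s expansion step above concrete: writing the signed Laplacian as L = D + A_neg + A_pos, with D diagonal, A_neg the ordinary (negative) off-diagonal part and A_pos the adverse (positive) off-diagonal part, one standard form of the expansion solves a 2n × 2n system with only non-positive off-diagonals and recovers x = (x1 - x2)/2. The toy check below follows that textbook construction under my own reading of it; it is not code from the thesis.

```python
import numpy as np

def gremban_solve(L, b):
    """Solve L x = b for a symmetric matrix with some positive off-diagonal entries
    (e.g. a signed graph Laplacian) via Gremban's expansion: split the off-diagonal
    part into negative (A_neg) and positive (A_pos) pieces, solve the expanded
    2n x 2n system, and recover x = (x1 - x2) / 2.  A toy sketch, not thesis code."""
    D = np.diag(np.diag(L))
    off = L - D
    A_neg = np.where(off < 0, off, 0.0)
    A_pos = np.where(off > 0, off, 0.0)
    L_hat = np.block([[D + A_neg, -A_pos],
                      [-A_pos, D + A_neg]])     # only non-positive off-diagonals
    # The expanded matrix is a (singular) unsigned-style Laplacian, so use lstsq;
    # any null-space component cancels when the two halves are subtracted.
    x_hat, *_ = np.linalg.lstsq(L_hat, np.concatenate([b, -b]), rcond=None)
    n = L.shape[0]
    return 0.5 * (x_hat[:n] - x_hat[n:])

# Signed Laplacian of a 4-node graph with one "enemy" edge (0,3); the unbalanced
# cycle 0-2-3 makes L itself nonsingular, so the recovered x can be checked directly.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 2, 1.0), (0, 3, -1.0)]
L = np.zeros((4, 4))
for i, j, w in edges:
    L[i, i] += abs(w); L[j, j] += abs(w)
    L[i, j] -= w;      L[j, i] -= w
b = np.array([1.0, 0.0, 0.0, -1.0])
x = gremban_solve(L, b)
print(np.allclose(L @ x, b))    # expansion reproduces the direct solution
```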
Advisors/Committee Members: Tom Manteuffel, Geoff Sanders, John Ruge, Christian Ketelsen, Francois Meyer.
Subjects/Keywords: Algebraic Multigrid; Directed graphs; Graph Laplacians; Gremban's expansion; Signed graphs; Applied Mechanics
APA (6th Edition):
Fox, A. L. (2017). Algebraic Multigrid (AMG) for Graph Laplacian Linear Systems: Extensions of AMG for Signed, Undirected and Unsigned, Directed Graphs. (Doctoral Dissertation). University of Colorado. Retrieved from https://scholar.colorado.edu/appm_gradetds/96
Chicago Manual of Style (16th Edition):
Fox, Alyson Lindsey. “Algebraic Multigrid (AMG) for Graph Laplacian Linear Systems: Extensions of AMG for Signed, Undirected and Unsigned, Directed Graphs.” 2017. Doctoral Dissertation, University of Colorado. Accessed March 05, 2021. https://scholar.colorado.edu/appm_gradetds/96.
MLA Handbook (7th Edition):
Fox, Alyson Lindsey. “Algebraic Multigrid (AMG) for Graph Laplacian Linear Systems: Extensions of AMG for Signed, Undirected and Unsigned, Directed Graphs.” 2017. Web. 05 Mar 2021.
Vancouver:
Fox AL. Algebraic Multigrid (AMG) for Graph Laplacian Linear Systems: Extensions of AMG for Signed, Undirected and Unsigned, Directed Graphs. [Internet] [Doctoral dissertation]. University of Colorado; 2017. [cited 2021 Mar 05]. Available from: https://scholar.colorado.edu/appm_gradetds/96.
Council of Science Editors:
Fox AL. Algebraic Multigrid (AMG) for Graph Laplacian Linear Systems: Extensions of AMG for Signed, Undirected and Unsigned, Directed Graphs. [Doctoral Dissertation]. University of Colorado; 2017. Available from: https://scholar.colorado.edu/appm_gradetds/96

University of Colorado
5.
Heavner, Nathan.
Building Rank-Revealing Factorizations with Randomization.
Degree: PhD, 2019, University of Colorado
URL: https://scholar.colorado.edu/appm_gradetds/155
This thesis describes a set of randomized algorithms for computing rank-revealing factorizations of matrices. These algorithms are designed specifically to minimize the amount of data movement required, which is essential to high practical performance on modern computing hardware. The work presented builds on existing randomized algorithms for computing low-rank approximations to matrices, but essentially extends the range of applicability of these methods by allowing for the efficient decomposition of matrices of any numerical rank, including full-rank matrices. In contrast, existing methods worked well only when the numerical rank was substantially smaller than the dimensions of the matrix. The thesis describes algorithms for computing two of the most popular rank-revealing matrix decompositions: the column-pivoted QR (CPQR) decomposition, and the so-called UTV decomposition that factors a given matrix A as A = UTV∗, where U and V have orthonormal columns and T is triangular. For each decomposition, the thesis presents algorithms that are tailored for different computing environments, including multicore shared-memory processors, GPUs, distributed-memory machines, and matrices that are stored on hard drives (“out of core”). The first chapter of the thesis consists of an introduction that provides context, reviews previous work in the field, and summarizes the key contributions. Besides the introduction, the thesis contains six additional chapters. Chapter 2 introduces a fully blocked algorithm, HQRRP, for computing a QR factorization with column pivoting. The key to the full blocking of the algorithm lies in using randomized projections to create a low-dimensional sketch of the data, where multiple good pivot columns may be cheaply computed. Numerical experiments show that HQRRP is several times faster than the classical algorithm for computing a column-pivoted QR on a multicore machine, and the acceleration factor increases with the number of cores. Chapter 3 introduces randUTV, a randomized algorithm for computing a rank-revealing factorization of the form A = UTV∗, where U and V are orthogonal and T is upper triangular. randUTV uses randomized methods to efficiently build U and V as approximations of the column and row spaces of A. The result is an algorithm that reveals rank nearly as well as the SVD and costs at most as much as a column-pivoted QR. Chapter 4 provides optimized implementations for shared- and distributed-memory architectures. For shared memory, we show that formulating randUTV as an algorithm-by-blocks increases its efficiency in parallel. The fifth chapter implements randUTV on the GPU and augments the algorithm with an oversampling technique to further increase the low-rank approximation properties of the resulting factorization. Chapter 6 implements both randUTV and HQRRP for use with matrices stored out of core. It is shown that reorganizing HQRRP as a left-looking algorithm to reduce the number of writes to the drive is in the tested cases…
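The randomized-projection idea behind HQRRP described above can be boiled down to a few lines: rather than pivoting on the full matrix, one runs column-pivoted QR on a small random sketch Y = ΩA and reuses those pivots for a whole block step. The snippet below is an illustrative simplification with made-up dimensions, not the thesis implementation (which blocks the trailing updates and handles many further details):

```python
import numpy as np
from scipy.linalg import qr

def sketched_block_pivots(A, block_size, oversample=10, rng=None):
    """Pick `block_size` pivot columns of A by running column-pivoted QR on a small
    random sketch Y = Omega @ A (the idea behind HQRRP's blocked pivoting;
    an illustrative simplification, not the thesis code)."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((block_size + oversample, A.shape[0]))
    Y = Omega @ A                               # (b + p) x n sketch, cheap to pivot
    _, _, piv = qr(Y, mode="economic", pivoting=True)
    return piv[:block_size]                     # column indices to move to the front

# Toy check on a matrix of numerical rank ~15: the sketched pivots should capture
# most of the spectrum, much like pivots chosen from the full matrix would.
rng = np.random.default_rng(0)
U = np.linalg.qr(rng.standard_normal((500, 500)))[0]
V = np.linalg.qr(rng.standard_normal((300, 300)))[0]
s = np.concatenate([np.ones(15), 1e-6 * np.ones(285)])
A = (U[:, :300] * s) @ V.T

piv = sketched_block_pivots(A, block_size=20, rng=1)
Q, _ = np.linalg.qr(A[:, piv])                  # basis from the selected columns
residual = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
print(f"relative residual after one block of sketched pivots: {residual:.2e}")
```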
Advisors/Committee Members: Per-Gunnar Martinsson, Stephen Becker, Gregory Beylkin, Gregorio Quintana-Ortí, Christian Ketelsen.
Subjects/Keywords: linear algebra; matrix factorizations; randomization; rank-revealing factorizations; Applied Mathematics
APA (6th Edition):
Heavner, N. (2019). Building Rank-Revealing Factorizations with Randomization. (Doctoral Dissertation). University of Colorado. Retrieved from https://scholar.colorado.edu/appm_gradetds/155
Chicago Manual of Style (16th Edition):
Heavner, Nathan. “Building Rank-Revealing Factorizations with Randomization.” 2019. Doctoral Dissertation, University of Colorado. Accessed March 05, 2021. https://scholar.colorado.edu/appm_gradetds/155.
MLA Handbook (7th Edition):
Heavner, Nathan. “Building Rank-Revealing Factorizations with Randomization.” 2019. Web. 05 Mar 2021.
Vancouver:
Heavner N. Building Rank-Revealing Factorizations with Randomization. [Internet] [Doctoral dissertation]. University of Colorado; 2019. [cited 2021 Mar 05]. Available from: https://scholar.colorado.edu/appm_gradetds/155.
Council of Science Editors:
Heavner N. Building Rank-Revealing Factorizations with Randomization. [Doctoral Dissertation]. University of Colorado; 2019. Available from: https://scholar.colorado.edu/appm_gradetds/155