You searched for +publisher:"Cornell University" +contributor:("Ruppert, David").
Showing records 1 – 13 of 13 total matches.
No search limiters apply to these results.

Cornell University
1.
Liu, Yang.
Nonparametric Regression and Density Estimation on a Network.
Degree: PhD, Statistics, 2020, Cornell University
URL: http://hdl.handle.net/1813/103360
We propose nonparametric regression and density estimators on a network. A network is defined as a collection of edges that are connected by vertices. There are numerous types of networks, such as streets, subway lines, electrical wires, airline routes, or nerve fibers and blood vessels. While broadly applicable, our methodology focuses on the challenging cases in which the best estimator near a vertex depends on the amount of smoothness at the vertex. To estimate the function in a neighborhood of a vertex, a two-step procedure is proposed. The first step of this pretest estimator fits a separate local polynomial regression on each edge and then tests for equality of the estimates at the vertex. If the null hypothesis is not rejected, the second step re-estimates the function in a small neighborhood of the vertex, subject to a joint equality constraint. Since the derivative of the function may be discontinuous at the vertex, a piecewise polynomial local regression estimate is used to model the change in slope. Our approach removes the bias near a vertex that has been noted for existing methods, which typically do not allow for discontinuity at vertices. The implementation of our approach is fast; its computation time scales sub-linearly with the amount of data. We use the model to estimate the density of spines on a dendritic tree. Despite the simple intuition and easy implementation of the two-step procedure, it leaves the significance level of the test as a tuning parameter, and the type II error of the test is difficult to analyze. As an alternative approach, we minimize the MSE of the estimator near the vertex by penalizing the l2 norm of the jumps at the vertex. We propose a method for estimating the locally optimal bandwidth and penalty parameter, and derive their error bounds. In order to derive the rate of convergence of the selector, we develop a uniform almost sure asymptotic theory of our model. The theory is also of interest in its own right. We show that it holds with probability 1, uniformly over the neighborhood of a vertex, the bandwidth, and the penalty parameter. We apply our model to New York City taxi data and study how the average trip cost for a taxi ride changes with trip origin along the streets of Manhattan. Finally, we study the asymptotic properties of the penalized estimator with an lr penalty, for r > 0. We derive the limiting distributions of the estimates, and show that, under appropriate conditions, the limiting distributions of the jumps put positive probability mass at zero, so we can obtain continuous estimates at the vertex. We develop a bootstrap estimator of the bias and variance of the proposed penalized estimator, and justify it using asymptotic arguments.
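The two-step pretest idea described above can be illustrated with a toy sketch for a vertex where two edges meet. Everything below (the simulated data, ordinary least-squares fits on whole edges rather than local fits in a shrinking neighborhood, and the 5% z-test) is an illustrative assumption, not the dissertation's actual estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two edges meeting at a vertex located at x = 0; x is distance from the vertex.
n = 200
x1 = rng.uniform(0, 1, n)           # distances along edge 1
x2 = rng.uniform(0, 1, n)           # distances along edge 2
f1 = lambda x: 1.0 + 0.5 * x        # true mean on edge 1
f2 = lambda x: 1.0 - 2.0 * x        # same value at the vertex, different slope
y1 = f1(x1) + rng.normal(0, 0.1, n)
y2 = f2(x2) + rng.normal(0, 0.1, n)

def linear_fit(x, y):
    """OLS line y = a + b*x; returns (a, b) and the variance of a-hat."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, cov[0, 0]

# Step 1: separate fits on each edge, then a z-test for equality of the
# fitted values at the vertex (the intercepts).
(b1, v1), (b2, v2) = linear_fit(x1, y1), linear_fit(x2, y2)
z = (b1[0] - b2[0]) / np.sqrt(v1 + v2)
continuous_at_vertex = abs(z) < 1.96   # fail to reject equality at ~5% level

# Step 2: if equality is not rejected, refit jointly with a common value at
# the vertex but separate slopes (allowing a slope change at the vertex).
if continuous_at_vertex:
    X = np.column_stack([np.ones(2 * n),
                         np.concatenate([x1, np.zeros(n)]),
                         np.concatenate([np.zeros(n), x2])])
    beta, *_ = np.linalg.lstsq(X, np.concatenate([y1, y2]), rcond=None)
    vertex_estimate = beta[0]          # pooled estimate at the vertex
else:
    vertex_estimate = (b1[0], b2[0])   # keep the separate edge estimates
```

The piecewise design matrix in step 2 is what lets the derivative jump at the vertex while the function value stays continuous.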
Advisors/Committee Members: Ruppert, David (chair), Frazier, Peter (committee member), Guinness, Joe (committee member).

Cornell University
2.
Li, Yingxing.
Aspects Of Penalized Splines.
Degree: PhD, Statistics, 2011, Cornell University
URL: http://hdl.handle.net/1813/33653
The penalized splines approach has important applications in statistics. The idea is to fit the unknown regression mean using a high-dimensional spline basis, subject to a penalty on roughness. Such an approach avoids the stringency of a parametric model and enables a considerable reduction in computational cost without a loss of statistical precision. Moreover, the idea can also be connected with ridge regression and mixed models, thus allowing more flexible handling of longitudinal and spatial correlation. This thesis focuses on nonparametric and semiparametric estimation and inference using penalized splines. First, we consider the penalized splines approach proposed by Eilers and Marx (1996), also called the P-splines approach. We derive its asymptotic properties when the number of spline basis functions increases with the sample size. For both the univariate model and additive models, we establish the asymptotic distribution of the estimators and give simple expressions for the asymptotic mean and variance. Such an asymptotic theory allows P-splines estimators to be compared theoretically with other nonparametric estimators and offers guidance for practitioners when considering the choice of the penalty and basis functions. Next, we turn to global inference problems for functional data. We model the population mean function using polynomial splines. By utilizing the mixed-model-based penalized splines approach, we treat some of the spline coefficients as random effects with a single variance component and relate hypotheses of interest to tests of this variance component being zero. To take into account the dependence structure or within-subject correlation, we propose a pseudo-likelihood test statistic and derive its null distribution. This work extends existing results on pseudo-likelihood by allowing the use of nonparametric smoothing (usually with a slower convergence rate). Its effectiveness is demonstrated via simulations and an empirical application to the Sleep Heart Health Study.
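As a rough illustration of the penalized-spline idea above (a high-dimensional spline basis plus a roughness penalty, solved as a ridge regression), the sketch below uses a quadratic truncated-power basis with a ridge penalty on the knot coefficients. Eilers and Marx's P-splines instead use B-splines with a difference penalty, so this is only an analogous toy; the basis size and penalty value are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)

# High-dimensional spline basis: quadratic truncated-power basis, 20 interior knots.
knots = np.linspace(0, 1, 22)[1:-1]
B = np.column_stack([np.ones(n), x, x**2] +
                    [np.clip(x - k, 0, None)**2 for k in knots])

# Roughness penalty: ridge on the knot (jump-in-curvature) coefficients only,
# leaving the global quadratic unpenalized. Solution is a penalized least squares fit:
#   beta = (B'B + lam * D)^(-1) B'y
lam = 1e-3
D = np.zeros(B.shape[1])
D[3:] = 1.0
beta = np.linalg.solve(B.T @ B + lam * np.diag(D), B.T @ y)
fitted = B @ beta
```

Increasing `lam` shrinks the curvature jumps at the knots toward zero, interpolating between a rich spline fit and a plain quadratic; this is the same bias-variance trade-off the penalty controls in P-splines.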
Advisors/Committee Members: Ruppert, David (chair), Booth, James (committee member), Hooker, Giles J. (committee member).
Subjects/Keywords: P-splines; asymptotics; global inference

Cornell University
3.
Soiaporn, Kunlaya.
On The Modeling Of Multiple Functional Outcomes With Spatially Heterogeneous Shape Characteristics.
Degree: PhD, Operations Research, 2014, Cornell University
URL: http://hdl.handle.net/1813/36106
This dissertation presents an approach for analyzing functional data with multiple outcomes that exhibit spatially heterogeneous shape characteristics. An example of data of this type, which motivated this study, is data from a diffusion tensor imaging (DTI) study of neuronal tracts in multiple sclerosis (MS) patients. DTI is an imaging technique for measuring the diffusion of water that can be used to detect abnormalities in brain tissue. DTI tractography can be summarized by three functional outcomes, measuring the diffusion in different directions. One of the main and most common difficulties in functional data analysis is the large number of parameters to be estimated. This is especially challenging when multiple functional outcomes are considered. To address this problem, a copula approach is adopted so that the marginal distributions and the dependence structure are estimated independently. In addition to fast computation, the two-step approach also allows flexibility in the specification of the distribution of the data, as the marginal distributions and the copula distribution can be specified separately. The first part of this dissertation presents an estimation algorithm using the copula approach. The marginal distribution parameters are estimated using methodology based on maximum likelihood and penalized splines. In the estimation of the dependence structure, the Karhunen-Loeve expansion and an EM algorithm are used to significantly reduce the dimension of the problem. This allows the dependence within the same outcome and across different outcomes to be captured, even in the case of many functional outcomes. The second part of this dissertation demonstrates the application of the methodology to the DTI study. The goal is to identify the locations where abnormalities occur and to characterize the abnormalities in MS patients. The differences in the marginal distributions and dependence structure between the MS group and the healthy control group are then used to develop a method for predicting case status for patients. The last part of the dissertation explores the DTI study in a longitudinal setting. A larger dataset that contains DTI data from multiple visits is studied. We adopt a multilevel approach to investigate how the DTI tractography in MS patients varies over time.
Advisors/Committee Members: Ruppert, David (chair), Jarrow, Robert A. (committee member), Frazier, Peter (committee member).
Subjects/Keywords: correlated functional outcomes; diffusion tensor imaging; skewed functional data

Cornell University
4.
Xiao, Luo.
Topics In Bivariate Spline Smoothing.
Degree: PhD, Statistics, 2012, Cornell University
URL: http://hdl.handle.net/1813/31092
Penalized spline methods have been popular since the work of Eilers and Marx (1996). Recent years have seen extensive theoretical studies and a wide range of applications of penalized splines. In this dissertation, we consider penalized splines for smoothing two-dimensional data. In Chapter 2, we propose a new spline smoother, the sandwich smoother, for smoothing data on a rectangular grid. Univariate P-spline smoothers are applied simultaneously along both coordinates. The sandwich smoother has a tensor product structure that simplifies an asymptotic analysis and allows fast computation. We derive a local central limit theorem for the sandwich smoother, with simple expressions for the asymptotic bias and variance, by showing that the sandwich smoother is asymptotically equivalent to a bivariate kernel regression estimator with a product kernel. As far as we are aware, this is the first central limit theorem for a bivariate spline estimator of any type. Our simulation study shows that the sandwich smoother is orders of magnitude faster to compute than other bivariate spline smoothers, even when the latter are computed using a fast GLAM (Generalized Linear Array Model) algorithm, and comparable to them in terms of mean integrated squared error. One important application of the sandwich smoother is to estimate covariance functions in functional data analysis. In this application, our numerical results show that the sandwich smoother is orders of magnitude faster than local linear regression. In Chapter 3, based on the sandwich smoother, we propose a fast covariance function estimation method (FACE) for smoothing high-dimensional functional data. We show that our method overcomes the computational difficulty of common bivariate smoothers for smoothing high-dimensional covariance operators, and in particular we derive a fast algorithm for selecting the smoothing parameter. We also show that through FACE we can simultaneously obtain the smoothed covariance operator and its associated eigenfunctions. For functional principal component analysis, we derive a fast method for calculating the principal scores. A simulation study illustrates the computational speed of FACE. Although not a focus of this dissertation, we present in Appendix A a theoretical study of the local asymptotics of P-splines for the univariate case. In this work we derive the local asymptotic distribution of P-splines both at an interior point and near the boundary. Some of the results of this work are used in studying the sandwich smoother.
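The tensor product structure of the sandwich smoother can be sketched as pre- and post-multiplying the data matrix by univariate smoother matrices, Y_hat = S1 Y S2'. The simple second-difference ridge smoothers below stand in for the P-spline smoothers of the dissertation; the grid sizes and penalty values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def smoother_matrix(m, lam):
    """Penalized smoother S = (I + lam * D2'D2)^{-1} with a second-difference penalty."""
    D2 = np.diff(np.eye(m), n=2, axis=0)
    return np.linalg.inv(np.eye(m) + lam * D2.T @ D2)

# Noisy observations of a smooth surface on a rectangular grid.
m1, m2 = 40, 50
t1, t2 = np.linspace(0, 1, m1), np.linspace(0, 1, m2)
truth = np.outer(np.sin(2 * np.pi * t1), np.cos(2 * np.pi * t2))
Y = truth + rng.normal(0, 0.3, (m1, m2))

# Sandwich smoother: apply a univariate smoother along each coordinate.
S1, S2 = smoother_matrix(m1, 10.0), smoother_matrix(m2, 10.0)
Y_hat = S1 @ Y @ S2.T
```

The equivalent vectorized form is vec(Y_hat) = (S2 ⊗ S1) vec(Y) with column-major vec, which is the tensor (Kronecker) product structure that makes the asymptotic analysis and the computation tractable: two small univariate smooths replace one large bivariate one.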
Advisors/Committee Members: Ruppert, David (chair), Hooker, Giles J. (committee member), Strawderman, Robert Lee (committee member).
Subjects/Keywords: penalized splines; nonparametric regression; smoothing

Cornell University
5.
Magnusson, Baldur.
Targeted Therapies: Adaptive Sequential Designs For Subgroup Selection In Clinical Trials.
Degree: PhD, Operations Research, 2011, Cornell University
URL: http://hdl.handle.net/1813/29238
A critical part of clinical trials in drug development is the analysis of treatment efficacy in patient subgroups (subpopulations). Due to multiplicity and the small sample sizes involved, this analysis presents substantial statistical challenges and can lead to misleading conclusions. In this thesis, we develop methodology for statistically valid subgroup analysis in a variety of settings. First, we consider a number of trial designs of varying flexibility for the case of one subgroup of interest. Some procedures are novel, while others are adapted from the literature. Included is data-driven consideration of adaptive change of subject eligibility criteria, known as adaptive enrichment, whereby apparently nonresponsive patient populations are not recruited after data have been unblinded for an interim analysis. We conduct an extensive numerical study to investigate design operating characteristics, as well as sensitivity to subgroup prevalence and interim analysis timing. We observe that power gains can be substantial when a treatment is only effective in the subgroup of interest. Following this example, selected procedures are generalized to allow for analysis of an arbitrary number of subgroups. Next, we propose a K-stage group sequential design that can be applied as a confirmatory seamless Phase II/III design. The design is specified through upper and lower spending functions, defined in terms of calendar times. After the first stage, poorly performing subgroups are eliminated and the remaining population is pooled for the duration of the trial. This procedure combines the elimination of non-sensitive subgroups with the definitive assessment of treatment efficacy associated with traditional group sequential designs. Numerical examples show that the procedure has high power to detect subgroup-specific effects, and that the use of multiple interim analysis points can lead to substantial sample size savings. We address the challenges of adjusting for selection bias and of protecting the familywise error rate in the strong sense. All designs are presented either in terms of standardized test statistics or the efficient score, making the analysis of normal, binary, or time-to-event data straightforward.
Advisors/Committee Members: Turnbull, Bruce William (chair), Jarrow, Robert A. (committee member), Ruppert, David (committee member).
Subjects/Keywords: Clinical trial design; Subgroup selection; Multiple comparison procedures

Cornell University
6.
Kim, Sungjin.
ESSAYS ON BAYESIAN ANALYSES FOR BETTER MARKETING AND A BETTER WORLD.
Degree: PhD, Management, 2020, Cornell University
URL: http://hdl.handle.net/1813/102934
In the first chapter, the authors propose a new Bayesian synthetic control framework to overcome limitations of extant synthetic control methods (SCMs). The proposed Bayesian synthetic control methods (BSCMs) do not impose any restrictive constraints on the parameter space a priori. Moreover, the proposed models provide statistical inference in a straightforward manner, along with a natural mechanism for dealing with the “large p, small n” and sparsity problems through Markov chain Monte Carlo (MCMC) procedures. The authors find via simulations that, for a variety of data generating processes, the proposed BSCMs almost always provide better predictive accuracy and parameter precision than extant SCMs. They demonstrate an application of the proposed BSCMs to a real-world context: a tax imposed on soda sales in Washington state in 2010. As in the simulations, the proposed models outperform extant models, as measured by predictive accuracy in the post-treatment periods. They find that the tax led to an increase of 5.7% in retail price and a decrease of 5.5∼5.8% in sales. They also find that retailers in Washington over-shifted the tax to consumers, leading to a pass-through rate of about 121%. In the second chapter, the authors develop a utility-based multiple discrete-continuous model of charitable giving that provides insights into potentially large differences in individuals' giving behaviors across forms of giving. The model also incorporates, via Bayesian Gaussian processes, changes in givers’ preferences for forms of giving as the relationship with the NPO evolves. The authors apply their model to five years of giving data for a cohort of individuals. They find that the effects of lifetime, recency, seasonality, and responsiveness to appeals of donation and membership options change non-monotonically over time in distinctive ways. Moreover, they find substantial individual heterogeneity in preferences for forms of giving. The authors demonstrate that the model estimates help to predictively identify who will give in multiple forms in the future, and to build appeal targeting strategies. In the third chapter, the authors address two questions: (i) What are the long-term impacts of the Philadelphia beverage tax on sales and prices of taxed beverage categories? (ii) Do the impacts of the beverage tax spill over to other, non-taxed product categories? They plan to empirically investigate various healthy and unhealthy potential complements and substitutes of the beverage category to study the spillover effects of the beverage tax in Philadelphia. The goal of essay 3 is to propose a research idea for which the empirical analysis would be completed subsequent to the Ph.D.
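A stripped-down sketch of the synthetic control idea in a Bayesian flavor: regress the treated unit's pre-treatment outcomes on donor units under a conjugate normal prior (with no sign or sum constraints on the weights), then use the posterior-mean weights to form a post-treatment counterfactual. This is not the authors' BSCM, which uses MCMC and richer prior structures; every quantity below is a simulated assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated panel: 10 donor units, 60 pre-treatment and 20 post-treatment periods.
n_pre, n_post, p = 60, 20, 10
X_pre = rng.normal(size=(n_pre, p))      # donor outcomes, pre-treatment
X_post = rng.normal(size=(n_post, p))    # donor outcomes, post-treatment
w_true = np.array([0.5, 0.3, 0.2] + [0.0] * 7)   # sparse true donor weights
y_pre = X_pre @ w_true + rng.normal(0, 0.1, n_pre)
effect_true = 2.0                                 # treatment effect in post periods
y_post = X_post @ w_true + effect_true + rng.normal(0, 0.1, n_post)

# Conjugate posterior for the weights under y ~ N(Xw, s2*I), w ~ N(0, tau2*I):
#   w | y ~ N(A^{-1} X'y / s2, A^{-1}),  A = X'X / s2 + I / tau2
s2, tau2 = 0.01, 1.0
A = X_pre.T @ X_pre / s2 + np.eye(p) / tau2
w_post_mean = np.linalg.solve(A, X_pre.T @ y_pre / s2)

# Counterfactual for the treated unit, and the estimated treatment effect.
y_cf = X_post @ w_post_mean
effect_hat = np.mean(y_post - y_cf)
```

Because the prior puts no simplex constraint on the weights, this corresponds to the "unconstrained parameter space" point made in the abstract; sparsity-inducing priors would replace the plain Gaussian prior in a fuller treatment.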
Advisors/Committee Members: Gupta, Sachin (chair), Kadiyali, Vrinda (committee member), Lee, Clarence (committee member), Ruppert, David (committee member).
Subjects/Keywords: Bayesian Estimation; Charitable Giving; Nonprofit; Soda Tax; Synthetic Control; Treatment Effect

Cornell University
7.
Maxwell, Matthew.
Approximate Dynamic Programming Policies And Performance Bounds For Ambulance Redeployment.
Degree: PhD, Operations Research, 2011, Cornell University
URL: http://hdl.handle.net/1813/29425
Ambulance redeployment is the practice of dynamically relocating idle ambulances based upon real-time information to reduce expected response times for future emergency calls. Ambulance redeployment performance is often measured by the fraction of "lost calls", i.e., calls with response times larger than a given threshold time. This dissertation is a collection of four papers detailing results for designing ambulance redeployment policies and bounding the performance of an optimal ambulance redeployment policy. In the first paper, ambulance redeployment is modeled as a Markov decision process, and an approximate dynamic programming (ADP) policy is formulated for this model. Computational results show that the ADP policy is able to outperform benchmark policies in two different case studies based on real-life data. Results of practical concern, including how the ADP policy performs with varying call arrival rates and varying ambulance fleet sizes, are also included. In the second paper we discuss ADP tuning procedures, i.e., the process of selecting policy parameters to improve performance. We highlight limitations present in many ADP tuning procedures and propose direct-search tuning methods to overcome these limitations. To facilitate direct-search tuning for ambulance redeployment, we reformulate the ADP policy using the so-called "post-decision state" formulation. This reformulation allows policy decisions to be computed without computationally expensive simulations and makes direct-search tuning computationally feasible. In the third paper we prove that many ADP policies are equivalent to a simpler class of policies called nested compliance table (NCT) policies, which assign ambulances to bases according to the total number of available ambulances. Furthermore, we show that if ambulances are not assigned to the bases dictated by the NCT policy, the ADP-based policies will restore compliance with the NCT policy without dispatcher intervention. In the fourth paper we derive a computationally tractable lower bound on the minimum fraction of lost calls and propose a heuristic bound based upon simulation data from a reference policy, i.e., a policy we believe to be close to optimal. In certain circumstances both bounds can be quite loose, so we introduce a stylized model of ambulance redeployment and show empirically that for this model the lower bound is quite tight.
Advisors/Committee Members: Henderson, Shane G. (chair), Ruppert, David (committee member), Lewis, Mark E. (committee member), Topaloglu, Huseyin (committee member).
Subjects/Keywords: emergency services; approximate dynamic programming; post-decision state

Cornell University
8.
Steingrimsson, Jon.
Information Recovery With Missing Data When Outcomes Are Right Censored.
Degree: PhD, Statistics, 2015, Cornell University
URL: http://hdl.handle.net/1813/41100
▼ This dissertation focuses on utilizing information more efficiently in several settings when some observations are right-censored using the semiparametric efficiency theory developed in Robins et al. (1994). Chapter 2 focuses on estimation of the regression parameter in the semiparametric accelerated failure time model when the data is collected using a case-cohort design. The previously proposed methods of estimation use some form of HorvitzThompsons estimators which are known to be inefficient and the main aim of Chapter 2 is to improve efficiency of estimation of the regression parameter for the accelerated failure time model for case-cohort studies. We derive the semiparametric information bound and propose a more practical class of augmented estimators motivated by the augmentation theory developed in Robins et al. (1994). We develop large sample properties, identify the most efficient estimator within the class of augmented estimators, and give practical guidance on how to calculate the estimator. Regression trees are non-parametric methods that use reduction in loss to partition the covariate space into binary partitions creating a prediction model that is easily interpreted and visualized. When some observations are censored the full data loss function is not a function of the observed data and Molinaro et al. (2004) used inverse probability weighted estimators to extend the loss functions to right-censored outcomes. Motivated by semiparametric efficiency theory Chapter 3 extends the approach in Molinaro et al. (2004) by using doubly robust loss function that utilize information on censored observations better in addition to being more robust to the modeling choices that need to be made. Regression trees are known to suffer from instability with minor changes in the data sometimes resulting in very different trees. Ensemble based methods that average several trees have been shown to lead to prediction models that usually have smaller prediction error. 
One such ensemble method is random forests (Breiman, 2001), and in Chapter 4 we use the regression tree methodology developed in Chapter 3 as a building block for random forests.
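The doubly robust losses of Chapter 3 refine the inverse-probability-of-censoring-weighted (IPCW) idea of Molinaro et al. (2004). A minimal sketch of the simpler IPCW building block (function and variable names are mine, not from the dissertation): uncensored observations are re-weighted by the inverse of the estimated censoring survival probability, so the weighted loss is unbiased for the unobservable full-data loss.

```python
import numpy as np

def ipcw_squared_error(y, pred, delta, G_hat):
    """Inverse-probability-of-censoring-weighted (IPCW) squared-error loss.

    y      : observed follow-up times (possibly censored)
    pred   : model predictions
    delta  : event indicator (1 = uncensored, 0 = censored)
    G_hat  : estimated censoring survival probability evaluated at each y_i
    """
    # Censored observations get weight 0; uncensored ones are up-weighted
    # by 1 / G_hat, making the weighted loss unbiased for the full-data loss.
    w = delta / G_hat
    return np.mean(w * (y - pred) ** 2)

# With no censoring (all delta = 1, G_hat = 1) this is the ordinary MSE.
```

The doubly robust version adds an augmentation term built from a model for the conditional event-time distribution, so the loss remains consistent if either the censoring model or that conditional model is correctly specified.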
Advisors/Committee Members: Hooker, Giles J. (chair), Strawderman, Robert Lee (co-chair), Wells, Martin Timothy (committee member), Ruppert, David (committee member).
Subjects/Keywords: Missing Data; Semiparametric Theory; Censored Data
APA (6th Edition):
Steingrimsson, J. (2015). Information Recovery With Missing Data When Outcomes Are Right Censored. (Doctoral Dissertation). Cornell University. Retrieved from http://hdl.handle.net/1813/41100

Cornell University
9.
Zhu, Liao.
The Adaptive Multi-Factor Model and the Financial Market.
Degree: PhD, Statistics, 2020, Cornell University
URL: http://hdl.handle.net/1813/70419
Modern technological developments have had a profound influence on the financial market. The introduction of instruments such as Exchange-Traded Funds, together with the widespread use of advanced technologies such as algorithmic trading, has produced a boom in data that provides more opportunities to reveal deeper insights. However, traditional statistical methods often struggle with the high-dimensional, highly correlated, and time-varying nature of financial data. This dissertation focuses on developing techniques to address these difficulties; with the proposed methodologies, we obtain more interpretable models, clearer explanations, and better predictions.

We begin by proposing a new algorithm for high-dimensional financial data, the Groupwise Interpretable Basis Selection (GIBS) algorithm, to estimate a new Adaptive Multi-Factor (AMF) asset pricing model implied by the recently developed Generalized Arbitrage Pricing Theory, which relaxes the convention that the number of risk factors is small. We first obtain an adaptive collection of basis assets and then simultaneously test which basis assets correspond to which securities. Since the collection of basis assets is large and highly correlated, high-dimensional methods are used. The AMF model, together with the GIBS algorithm, is shown to have significantly better fitting and prediction power than the Fama-French 5-factor (FF5) model.

Next, we perform time-invariance tests of the betas for both the AMF and FF5 models over various time periods. We show that for nearly all periods shorter than six years, the β coefficients are time-invariant for the AMF model but not for the FF5 model; over longer periods the β coefficients are time-varying for both. Therefore, using the dynamic AMF model with a suitable rolling window (such as five years) is more powerful and stable than the FF5 model.

We also provide a new explanation of the well-known low-volatility anomaly, which has long pervaded the finance literature. We use the AMF model estimated by the GIBS algorithm to find the basis assets significantly related to low- and high-volatility portfolios. These two portfolios load on very different factors, which indicates that volatility is not an independent risk but is related to existing risk factors. The outperformance of the low-volatility portfolio is due to the (equilibrium) performance of these loaded risk factors. For completeness, we compare the AMF model with the traditional FF5 model, documenting the superior performance of the AMF model.
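As an illustration of the two-stage idea (first an adaptive, low-correlation collection of basis assets, then a sparse fit per security), here is a hypothetical numpy-only sketch. The greedy correlation pruning, the plain lasso via coordinate descent, and all names are my simplifications for exposition, not the published GIBS algorithm.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for the lasso: 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta

def gibs_sketch(y_sec, R_basis, corr_cut=0.9, lam=5.0):
    """Hypothetical two-stage sketch for one security's returns y_sec.

    1) Greedily keep one representative per group of basis assets whose
       return correlation exceeds corr_cut (groupwise pruning).
    2) Fit a lasso of the security on the pruned basis, so only a few
       basis assets are selected.
    """
    C = np.corrcoef(R_basis, rowvar=False)
    keep = []
    for j in range(R_basis.shape[1]):
        if all(abs(C[j, k]) < corr_cut for k in keep):
            keep.append(j)
    beta = lasso_cd(R_basis[:, keep], y_sec, lam)
    return keep, beta
```

The pruning step mirrors the "large and highly correlated collection of basis assets" issue in the abstract: near-duplicate assets are collapsed before the sparse regression is fit.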
Advisors/Committee Members: Wells, Martin (chair), Jarrow, Robert (committee member), Ruppert, David (committee member), Mimno, David (committee member), Matteson, David (committee member).
Subjects/Keywords: AMF model; Asset pricing; GIBS algorithm; high-dimensional statistics; low-volatility anomaly; machine learning
APA (6th Edition):
Zhu, L. (2020). The Adaptive Multi-Factor Model and the Financial Market. (Doctoral Dissertation). Cornell University. Retrieved from http://hdl.handle.net/1813/70419

Cornell University
10.
McLean, Mathew.
On Generalized Additive Models For Regression With Functional Data.
Degree: PhD, Operations Research, 2013, Cornell University
URL: http://hdl.handle.net/1813/34323
The focus of this dissertation is the introduction of the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. The FGAM extends the commonly used functional linear model (FLM), offering greater flexibility while still being simple to interpret and easy to estimate. The link-transformed mean response is modelled as the integral with respect to t of F{X(t), t}, where F(·, ·) is an unknown bivariate regression function and X(t) is a functional covariate. Compare this with the FLM, which has F{X(t), t} = β(t)X(t), where β(t) is an unknown coefficient function. Rather than being additive in some projection of the data, the model incorporates the functional predictor directly and thus can be viewed as the natural functional extension of generalized additive models. The first part of the dissertation shows how to estimate F(·, ·) using tensor-product B-splines with roughness penalties. Fast, stable methods are used to fit the FGAM, and I discuss how approximate confidence bands can be constructed for the true regression surface. Additional functional predictors can be included with little added difficulty. The performance of the estimation procedure and the confidence bands is evaluated using simulated data, and I compare FGAM's predictive performance with other competing scalar-on-function regression alternatives, including the popular functional linear model. I illustrate the usefulness of the approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position t along a tract in the brain. In one example the response is disease status (case or control), and in a second it is the score on a cognitive test. R code for performing estimation, plotting, and prediction for the FGAM is explained and is available in the package refund on CRAN.
Frequently in practice, only incomplete, noisy versions of the functions one wishes to analyze are observed. The estimation procedure used in the first part of the thesis requires that the functional predictors be noiselessly observed on a regular grid. In the second part of the dissertation, I restrict attention to the identity-link, Gaussian-error case and develop a Bayesian version of the FGAM. This approach allows the functional covariates to be sparsely observed and measured with error. I consider both Monte Carlo and variational Bayes methods for jointly fitting the FGAM with sparsely observed covariates and recovering the true functional predictors. Because of the complicated form of the model's posterior and full conditional distributions, standard Monte Carlo and variational Bayes algorithms cannot be used; as such, the work should be of independent interest to applied Bayesian statisticians. The numerical studies demonstrate the benefits of the proposed algorithms over a two-step approach of first recovering the complete trajectories using standard…
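The contrast between the FLM and the FGAM described above can be written out explicitly. In the notation of the abstract (the intercept θ₀ and the domain [0, 1] are my additions for concreteness), with link function g:

```latex
% FGAM: an unknown bivariate surface F, integrated over the domain of X
g\bigl(\mathrm{E}[Y_i \mid X_i]\bigr)
  = \theta_0 + \int_0^1 F\{X_i(t), t\}\,dt

% FLM: the special case in which F is linear in its first argument
F\{x, t\} = \beta(t)\,x
  \quad\Longrightarrow\quad
g\bigl(\mathrm{E}[Y_i \mid X_i]\bigr)
  = \theta_0 + \int_0^1 \beta(t)\,X_i(t)\,dt

% Tensor-product B-spline expansion used to estimate F
F(x, t) \approx \sum_{j=1}^{K_x} \sum_{k=1}^{K_t}
  \theta_{jk}\, B_j^{X}(x)\, B_k^{T}(t)
```

With the basis expansion in place, estimating the surface F reduces to a penalized regression on the coefficients θ_jk, which is what makes the model "easy to estimate" despite its flexibility.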
Advisors/Committee Members: Ruppert, David (chair), Matteson, David (committee member), Hooker, Giles J. (committee member), Resnick, Sidney Ira (committee member).
Subjects/Keywords: Functional data analysis; Generalized additive models; Scalar on function regression
APA (6th Edition):
McLean, M. (2013). On Generalized Additive Models For Regression With Functional Data. (Doctoral Dissertation). Cornell University. Retrieved from http://hdl.handle.net/1813/34323
11.
Risk, Benjamin.
Topics In Independent Component Analysis, Likelihood Component Analysis, And Spatiotemporal Mixed Modeling.
Degree: PhD, Statistics, 2015, Cornell University
URL: http://hdl.handle.net/1813/41072
This dissertation explores dependence patterns using a range of statistical methods, from estimating latent factors in multivariate analysis to mixed modeling of spatially and temporally dependent data. The methods may be applied to many scientific problems and types of data, but here we focus on the application to functional magnetic resonance imaging (fMRI).

In the first chapter, we examine differences between independent component analyses (ICAs) arising from different assumptions, measures of dependence, and starting points of the algorithms. ICA is a popular method with diverse applications, including artifact removal in electrophysiology data, feature extraction in microarray data, and identifying brain networks in fMRI. ICA can be viewed as a generalization of principal component analysis (PCA) that takes into account higher-order cross-correlations. Whereas the PCA solution is unique, there are many ICA methods, whose solutions may differ. Infomax, FastICA, and JADE are commonly applied to fMRI studies, with FastICA being arguably the most popular. A previous study demonstrated that ProDenICA outperformed FastICA in simulations with two components. We introduce the application of ProDenICA to simulations with more components and to fMRI data. ProDenICA was more accurate in simulations, and we identified differences between biologically meaningful ICs from ProDenICA versus other methods in the fMRI analysis. ICA methods require non-convex optimization, yet current practices do not recognize the importance of, nor adequately address sensitivity to, initial values. We found that local optima led to dramatically different estimates in both simulations and group ICA of fMRI, and we provide evidence that the global optimum from ProDenICA is the best estimate.
We applied a modification of the Hungarian (Kuhn-Munkres) algorithm to match ICs from multiple estimates, thereby gaining novel insights into how brain networks vary in their sensitivity to initial values and ICA method. The manuscript resulting from this research is co-authored by David Matteson, David Ruppert, Ani Eloyan (Johns Hopkins University), and Brian Caffo (Johns Hopkins University).

In the second chapter, we develop a new approach for dimension reduction and latent variable estimation by maximizing a non-Gaussian likelihood. Independent component analysis (ICA) is popular in many applications, including cognitive neuroscience and signal processing. Due to computational constraints, principal component analysis is used for dimension reduction prior to ICA (PCA-ICA), which can remove important information. To address this issue, we propose likelihood component analysis (LCA), in which dimension reduction and latent variable estimation are achieved simultaneously by maximizing a likelihood with Gaussian and non-Gaussian components. We present a parametric model using the logistic density and a semi-parametric version using tilted Gaussians with cubic B-splines. We implement an algorithm scalable to datasets common…
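The component-matching step described above can be sketched with SciPy's implementation of the assignment algorithm (an illustration of the idea, not the authors' code). Since ICs are identified only up to sign and order, two runs are aligned by maximizing the total absolute correlation between matched components.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components(S_ref, S_new):
    """Align ICs from one run (S_new) to a reference run (S_ref).

    S_ref, S_new : (n_samples, k) arrays of estimated source signals.
    Returns col such that S_new[:, col[i]] is the match for S_ref[:, i].
    """
    k = S_ref.shape[1]
    # ICs are identified only up to sign and order, so compare runs by
    # absolute correlation between every pair of components.
    C = np.abs(np.corrcoef(S_ref.T, S_new.T)[:k, k:])
    # Hungarian (Kuhn-Munkres) assignment: maximize total |correlation|.
    _, col = linear_sum_assignment(-C)
    return col
```

Running the matcher across many restarts makes it possible to see which components are stable and which are sensitive to initial values, as studied in the chapter.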
Advisors/Committee Members: Ruppert, David (chair), Matteson, David (co-chair), Booth, James (committee member), Bien, Jacob (committee member).
Subjects/Keywords: dimension reduction; spatiotemporal dependence; functional magnetic resonance imaging
APA (6th Edition):
Risk, B. (2015). Topics In Independent Component Analysis, Likelihood Component Analysis, And Spatiotemporal Mixed Modeling. (Doctoral Dissertation). Cornell University. Retrieved from http://hdl.handle.net/1813/41072
12.
Jin, Ze.
Measuring Statistical Dependence and Its Applications in Machine Learning.
Degree: PhD, Statistics, 2018, Cornell University
URL: http://hdl.handle.net/1813/59677
My PhD research focuses on measuring and testing mutual dependence and conditional mean dependence, and on applying them to machine learning problems, as elaborated in the following four chapters.

Chapter 1 – We propose three new measures of mutual dependence between multiple random vectors. Each measure is zero if and only if the random vectors are mutually independent. The first generalizes distance covariance from pairwise dependence to mutual dependence, while the other two are sums of squared distance covariances. The proposed measures share similar properties and asymptotic distributions with distance covariance, and capture non-linear and non-monotone mutual dependence between the random vectors. Inspired by complete and incomplete V-statistics, we define empirical and simplified empirical measures as a trade-off between complexity and statistical power when testing mutual independence. The implementation of the corresponding tests is demonstrated by both simulation results and real data examples.

Chapter 2 – We apply both distance-based and kernel-based mutual dependence measures to independent component analysis (ICA), and generalize dCovICA to MDMICA, minimizing empirical dependence measures as an objective function in both deflation and parallel manners. To solve this minimization problem, we introduce Latin hypercube sampling (LHS) and a global optimization method, Bayesian optimization (BO), to improve the initialization of the Newton-type local optimization method. The performance of MDMICA is evaluated in various simulation studies and an image data example. When the ICA model is correct, MDMICA achieves results competitive with existing approaches. When the ICA model is misspecified, the independent components estimated by MDMICA are less mutually dependent than the observed components, whereas other approaches are prone to produce components even more mutually dependent than the observed ones.
Chapter 3 – Independent component analysis (ICA) decomposes multivariate data into mutually independent components (ICs). The ICA model is subject to a constraint that at most one of these components is Gaussian, which is required for model identifiability. Linear non-Gaussian component analysis (LNGCA) generalizes the ICA model to a linear latent factor model with any number of both non-Gaussian components (signals) and Gaussian components (noise), where observations are linear combinations of independent components. Although the individual Gaussian components are not identifiable, the Gaussian subspace is identifiable. We introduce an estimator along with its optimization approach in which non-Gaussian and Gaussian components are estimated simultaneously, maximizing the discrepancy of each non-Gaussian component from Gaussianity while minimizing the discrepancy of each Gaussian component from Gaussianity. When the number of non-Gaussian components is unknown, we develop a statistical test to determine it based on resampling and the discrepancy of estimated components. Through a variety of…
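The pairwise (squared) distance covariance that Chapter 1 generalizes to mutual dependence can be sketched in a few lines. This is an illustrative V-statistic implementation in the spirit of Székely et al.'s sample statistic, not the dissertation's code:

```python
import numpy as np

def dcov_sq(x, y):
    """Squared sample distance covariance (V-statistic form).

    Zero in the population if and only if x and y are independent,
    capturing non-linear and non-monotone dependence.
    """
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)

    def doubly_centered(z):
        # Pairwise Euclidean distance matrix, centered by row means,
        # column means, and the grand mean.
        D = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=2)
        return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()

    A, B = doubly_centered(x), doubly_centered(y)
    return (A * B).mean()
```

The "simplified empirical measures" of Chapter 1 trade the O(n²) cost of such complete statistics for cheaper incomplete versions, at some loss of statistical power.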
Advisors/Committee Members: Matteson, David (chair), Ruppert, David (committee member), Weinberger, Kilian Quirin (committee member).
Subjects/Keywords: Statistics; Mathematics; Computer science; conditional mean independence; independent component analysis; linear non-Gaussian component analysis; multivariate analysis; mutual independence; V-statistics
APA (6th Edition):
Jin, Z. (2018). Measuring Statistical Dependence and Its Applications in Machine Learning. (Doctoral Dissertation). Cornell University. Retrieved from http://hdl.handle.net/1813/59677
13.
Kowal, Daniel Ryan.
Bayesian Methods for Functional and Time Series Data.
Degree: PhD, Statistics, 2017, Cornell University
URL: http://hdl.handle.net/1813/56965
We introduce new Bayesian methodology for modeling functional and time series data. While broadly applicable, the methodology focuses on the challenging cases in which (1) functional data exhibit additional dependence, such as time dependence or contemporaneous dependence; (2) functional or time series data demonstrate local features, such as jumps or rapidly changing smoothness; and (3) a time series of functional data is observed sparsely or irregularly with non-negligible measurement error. A unifying characteristic of the proposed methods is the employment of the dynamic linear model (DLM) framework in new contexts to construct highly efficient Gibbs sampling algorithms.
To model dependent functional data, we extend DLMs for multivariate time series data to the functional data setting, and identify a smooth, time-invariant functional basis for the functional observations. The proposed model provides flexible modeling of complex dependence structures among the functional observations, such as time dependence, contemporaneous dependence, stochastic volatility, and covariates. We apply the model to multi-economy yield curve data and local field potential brain signals in rats.
For locally adaptive Bayesian time series and regression analysis, we propose a novel class of dynamic shrinkage processes. We extend a broad class of popular global-local shrinkage priors, such as the horseshoe prior, to the dynamic setting by allowing the local scale parameters to depend on the history of the shrinkage process. We prove that the resulting processes inherit desirable shrinkage behavior from the non-dynamic analogs, but provide additional locally adaptive shrinkage properties. We demonstrate the substantial empirical gains from the proposed dynamic shrinkage processes using extensive simulations, a Bayesian trend filtering model for irregular curve-fitting of CPU usage data, and an adaptive time-varying parameter regression model, which we employ to study the dynamic relevance of the factors in the Fama-French asset pricing model.
Finally, we propose a hierarchical functional autoregressive (FAR) model with Gaussian process innovations for forecasting and inference of sparsely or irregularly sampled functional time series data. We prove finite-sample forecasting and interpolation optimality properties of the proposed model, which remain valid with the Gaussian assumption relaxed. We apply the proposed methods to produce highly competitive forecasts of daily U.S. nominal and real yield curves.
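One way to make "local scale parameters depend on the history of the shrinkage process" concrete is the following sketch of a dynamic global-local prior. The notation and the AR(1) log-variance construction are illustrative, written from the horseshoe special case mentioned in the abstract:

```latex
% Global-local shrinkage prior with a time-varying local scale
\omega_t \mid \tau, \lambda_t \sim N\!\left(0,\; \tau^2 \lambda_t^2\right)

% The log-variance process carries the history of the shrinkage:
h_t = \log\!\left(\tau^2 \lambda_t^2\right), \qquad
h_{t+1} = \mu + \phi\,(h_t - \mu) + \eta_t

% phi = 0 with an appropriate innovation law for eta_t recovers a
% static prior such as the horseshoe; |phi| < 1 lets periods of heavy
% shrinkage (or of large signals) persist, giving local adaptivity.
```

Under this construction the marginal behavior matches the static global-local prior, while the autoregression on h_t supplies the locally adaptive shrinkage the abstract describes.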
Advisors/Committee Members: Ruppert, David (chair), Matteson, David (committee member), Jarrow, Robert A. (committee member), Wells, Martin Timothy (committee member).
Subjects/Keywords: Gaussian Process; Hierarchical Bayes; Yield Curve; Finance; Statistics; Dynamic Linear Model; Factor Model
APA (6th Edition):
Kowal, D. R. (2017). Bayesian Methods for Functional and Time Series Data. (Doctoral Dissertation). Cornell University. Retrieved from http://hdl.handle.net/1813/56965