You searched for +publisher:"Texas A&M University" +contributor:("Mallick, Bani").
Showing records 1 – 30 of 77 total matches.

Texas A&M University
1.
Peterson, Jacob Ross.
Exponentially-convergent Monte Carlo for the One-dimensional Transport Equation.
Degree: MS, Nuclear Engineering, 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/152727
An exponentially-convergent Monte Carlo (ECMC) method is analyzed using the one-group, one-dimensional, slab-geometry transport equation. The method is based upon the use of a linear discontinuous finite-element trial space in position and direction to represent the transport solution. A space-angle h-adaptive algorithm is employed to maintain exponential convergence after stagnation occurs due to inadequate trial-space resolution. In addition, a biased sampling algorithm is used to adequately converge singular problems. Computational results are presented demonstrating the efficacy of the new approach. We tested our ECMC algorithm against standard Monte Carlo and found the ECMC method to be generally much more efficient. For a manufactured solution the ECMC algorithm was roughly 200 times more effective than standard Monte Carlo. When considering a highly singular pure attenuation problem, the ECMC method was roughly 4000 times more effective.
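As context for the efficiency comparison above, a minimal sketch (not the dissertation's ECMC algorithm) of the standard analog Monte Carlo baseline for a pure-attenuation slab problem; the cross section, slab thickness, and history count are illustrative assumptions. It also shows why deep-penetration problems are hard for analog sampling, which motivates the large gains quoted above.

import numpy as np

# Toy baseline: standard (analog) Monte Carlo estimate of the uncollided
# transmission probability through a purely absorbing 1-D slab. ECMC itself
# (residual sampling on an LD finite-element trial space) is not reproduced here.
rng = np.random.default_rng(0)

sigma_t = 2.0      # total cross section (1/cm), assumed for illustration
thickness = 5.0    # slab thickness (cm), assumed for illustration
n_hist = 100_000   # number of particle histories

# Distance to first collision is exponential; a history "transmits" if its
# first flight exceeds the slab thickness.
flights = rng.exponential(scale=1.0 / sigma_t, size=n_hist)
transmitted = flights > thickness

p_hat = transmitted.mean()
std_err = transmitted.std(ddof=1) / np.sqrt(n_hist)
print(f"MC estimate: {p_hat:.3e} +/- {std_err:.1e}")
print(f"Analytic   : {np.exp(-sigma_t * thickness):.3e}")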
Advisors/Committee Members: Morel, Jim (advisor), Ragusa, Jean (advisor), Mallick, Bani (committee member).
Subjects/Keywords: Monte Carlo; Geometric Monte Carlo; Exponential Monte Carlo
APA (6th Edition):
Peterson, J. R. (2014). Exponentially-convergent Monte Carlo for the One-dimensional Transport Equation. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/152727
Chicago Manual of Style (16th Edition):
Peterson, Jacob Ross. “Exponentially-convergent Monte Carlo for the One-dimensional Transport Equation.” 2014. Masters Thesis, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/152727.
MLA Handbook (7th Edition):
Peterson, Jacob Ross. “Exponentially-convergent Monte Carlo for the One-dimensional Transport Equation.” 2014. Web. 07 Mar 2021.
Vancouver:
Peterson JR. Exponentially-convergent Monte Carlo for the One-dimensional Transport Equation. [Internet] [Masters thesis]. Texas A&M University; 2014. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/152727.
Council of Science Editors:
Peterson JR. Exponentially-convergent Monte Carlo for the One-dimensional Transport Equation. [Masters Thesis]. Texas A&M University; 2014. Available from: http://hdl.handle.net/1969.1/152727

Texas A&M University
2.
Dorn, Mary Frances.
Semiparametric Classification under a Forest Density Assumption.
Degree: PhD, Statistics, 2017, Texas A&M University
URL: http://hdl.handle.net/1969.1/161361
This dissertation proposes a new semiparametric approach for binary classification that exploits the modeling flexibility of sparse graphical models. This approach is based on non-parametrically estimated densities, which are notoriously difficult to obtain when the number of dimensions is even moderately large. In this work, it is assumed that each class can be well represented by a family of undirected sparse graphical models, specifically a forest-structured distribution. Under this assumption, non-parametric estimation of only one- and two-dimensional marginal densities is required to transform the data into a space where a linear classifier is optimal.
This work proves convergence results for the forest density classifier under certain conditions. Its performance is illustrated by comparing it to several state-of-the-art classifiers on simulated forest-distributed data as well as a panel of real datasets from different domains. These experiments indicate that the proposed method is competitive with popular methods across a wide range of applications.
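As a rough illustration of the forest-density idea above, a minimal sketch (not the dissertation's estimator): each class gets one- and two-dimensional kernel density estimates, the pairs are linked by a spanning tree, and a new point goes to the class with the larger forest log-density. The Gaussian-correlation surrogate for mutual information and the default KDE bandwidths are simplifying assumptions.

import numpy as np
from scipy.stats import gaussian_kde
from scipy.sparse.csgraph import minimum_spanning_tree

def fit_forest(X):
    """X: (n, d) samples from one class; returns 1-D KDEs and tree-edge 2-D KDEs."""
    d = X.shape[1]
    # Gaussian surrogate for pairwise mutual information, used only to pick edges.
    rho = np.corrcoef(X, rowvar=False)
    mi = -0.5 * np.log(np.clip(1.0 - rho ** 2, 1e-12, None))
    np.fill_diagonal(mi, 0.0)
    # Maximum-weight spanning tree via a minimum spanning tree on flipped weights.
    flipped = np.triu(mi.max() + 1.0 - mi, k=1)
    tree = minimum_spanning_tree(flipped).toarray()
    edges = [(i, j) for i in range(d) for j in range(i + 1, d) if tree[i, j] > 0]
    marg = [gaussian_kde(X[:, i]) for i in range(d)]
    pair = {(i, j): gaussian_kde(X[:, [i, j]].T) for i, j in edges}
    return marg, pair

def forest_logpdf(x, marg, pair):
    """Forest factorization: prod_i f_i * prod_(i,j) f_ij / (f_i f_j)."""
    lp = sum(np.log(m(x[i])[0]) for i, m in enumerate(marg))
    for (i, j), kde in pair.items():
        lp += (np.log(kde(x[[i, j]])[0])
               - np.log(marg[i](x[i])[0]) - np.log(marg[j](x[j])[0]))
    return lp

# Hypothetical usage with two classes of 4-dimensional data:
rng = np.random.default_rng(1)
X0 = rng.normal(size=(300, 4))
X1 = rng.normal(loc=0.8, size=(300, 4))
f0, f1 = fit_forest(X0), fit_forest(X1)
x_new = rng.normal(size=4)
label = int(forest_logpdf(x_new, *f1) > forest_logpdf(x_new, *f0))
print("predicted class:", label)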
Advisors/Committee Members: Spiegelman, Cliff (advisor), Bryant, Vaughn (committee member), Mallick, Bani (committee member), Johnson, Valen (committee member).
Subjects/Keywords: classification; nonparametric density estimation; forests; machine learning
APA (6th Edition):
Dorn, M. F. (2017). Semiparametric Classification under a Forest Density Assumption. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/161361
Chicago Manual of Style (16th Edition):
Dorn, Mary Frances. “Semiparametric Classification under a Forest Density Assumption.” 2017. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/161361.
MLA Handbook (7th Edition):
Dorn, Mary Frances. “Semiparametric Classification under a Forest Density Assumption.” 2017. Web. 07 Mar 2021.
Vancouver:
Dorn MF. Semiparametric Classification under a Forest Density Assumption. [Internet] [Doctoral dissertation]. Texas A&M University; 2017. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/161361.
Council of Science Editors:
Dorn MF. Semiparametric Classification under a Forest Density Assumption. [Doctoral Dissertation]. Texas A&M University; 2017. Available from: http://hdl.handle.net/1969.1/161361

Texas A&M University
3.
Konomi, Bledar.
Bayesian Spatial Modeling of Complex and High Dimensional Data.
Degree: PhD, Statistics, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10267
The main objective of this dissertation is to apply Bayesian modeling to different complex and high-dimensional spatial data sets. I develop Bayesian hierarchical spatial models for both the observed locations and the observation variable. Throughout this dissertation I carry out inference on the posterior distributions using Markov chain Monte Carlo, developing computational strategies that reduce the computational cost.
I start with a "high level" image analysis by modeling the pixels with a Gaussian process and the objects with a marked point process. The proposed method is an automatic image segmentation and classification procedure which simultaneously detects the boundaries and classifies the objects in the image into one of the predetermined shape families. Next, I move my attention to piecewise non-stationary Gaussian process models and their computational challenges for very large data sets. I simultaneously model the non-stationarity and reduce the computational cost by using the innovative technique of full-scale approximation. I successfully demonstrate the proposed reduction technique on the Total Ozone Mapping Spectrometer (TOMS) data. Furthermore, I extend the reduction method for the non-stationary Gaussian process models to a dynamic partition of the space by using a modified treed Gaussian model. This modification is based on the use of a non-stationary function and the full-scale approximation. The proposed model can deal with piecewise non-stationary geostatistical data with unknown partitions. Finally, I apply the method to the TOMS data to explore the non-stationary nature of the data.
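A minimal sketch of the full-scale-approximation idea mentioned above: a low-rank predictive-process part built on a small knot set, plus a tapered short-range residual. The exponential covariance, knot count, and spherical taper are chosen purely for illustration; the dissertation's exact construction may differ.

import numpy as np

# Full-scale approximation sketch: C_fs = C_lowrank + taper * (C - C_lowrank).
rng = np.random.default_rng(2)

def exp_cov(a, b, range_=0.3, sill=1.0):
    d = np.abs(a[:, None] - b[None, :])
    return sill * np.exp(-d / range_)

def taper(a, b, gamma=0.2):
    # Compactly supported "spherical" taper; a valid correlation function in 1-D.
    d = np.abs(a[:, None] - b[None, :]) / gamma
    t = 1 - 1.5 * d + 0.5 * d ** 3
    return np.where(d < 1.0, t, 0.0)

s = np.sort(rng.uniform(0, 1, size=500))      # observation locations
knots = np.linspace(0, 1, 15)                 # knot set for the low-rank part

C = exp_cov(s, s)
C_sk = exp_cov(s, knots)
C_kk = exp_cov(knots, knots)
C_lr = C_sk @ np.linalg.solve(C_kk, C_sk.T)   # predictive-process (Nystrom) part
C_fs = C_lr + taper(s, s) * (C - C_lr)        # add back tapered short-range residual

err_lr = np.linalg.norm(C - C_lr) / np.linalg.norm(C)
err_fs = np.linalg.norm(C - C_fs) / np.linalg.norm(C)
print(f"relative error, low-rank only: {err_lr:.3f}; full-scale: {err_fs:.3f}")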
Advisors/Committee Members: Mallick, Bani K. (advisor), Sang, Huiyan (advisor), Huang, Jianhua (committee member), Efendiev, Yalchin (committee member).
Subjects/Keywords: Object classification; Image segmentation; Nanoparticles; Markov chain Monte Carlo; Bayesian shape analysis; Predictive process; Full-scale approximation; Bayesian treed Gaussian process
APA (6th Edition):
Konomi, B. (2012). Bayesian Spatial Modeling of Complex and High Dimensional Data. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10267
Chicago Manual of Style (16th Edition):
Konomi, Bledar. “Bayesian Spatial Modeling of Complex and High Dimensional Data.” 2012. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10267.
MLA Handbook (7th Edition):
Konomi, Bledar. “Bayesian Spatial Modeling of Complex and High Dimensional Data.” 2012. Web. 07 Mar 2021.
Vancouver:
Konomi B. Bayesian Spatial Modeling of Complex and High Dimensional Data. [Internet] [Doctoral dissertation]. Texas A&M University; 2012. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10267.
Council of Science Editors:
Konomi B. Bayesian Spatial Modeling of Complex and High Dimensional Data. [Doctoral Dissertation]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10267

Texas A&M University
4.
Aldossary, Mubarak Nasser.
The Value of Assessing Uncertainty.
Degree: PhD, Petroleum Engineering, 2016, Texas A&M University
URL: http://hdl.handle.net/1969.1/156856
Despite the perception of lucrative earnings in the oil industry, various authors have noted that industry performance is routinely below expectations. For example, the average reported return for the industry was around 7% in the 1990s, even though a typical project hurdle rate was at least 15%. The underperformance is generally attributed to poor project evaluation and selection due to chronic bias. While a number of authors have investigated cognitive biases in oil and gas project evaluation, there have been few quantitative studies of the impact of biases on economic performance. Incomplete investigation and possible underestimation of the impact of biases in project evaluation and selection are at least partially responsible for the persistence of these biases.
The objectives of this work were to determine quantitatively the value of assessing uncertainty or, alternatively, the cost of underestimating uncertainty. This work presents a new framework for assessing the monetary impact of overconfidence bias and directional bias (i.e., optimism or pessimism) on portfolio performance. For moderate amounts of overconfidence and optimism, expected disappointment (having realized NPV less than estimated NPV) was 30-35% of estimated NPV for typical industry portfolios and optimization cases. Greater degrees of overconfidence and optimism resulted in expected disappointments approaching 100% of estimated NPV. Comparison of simulation results with expected industry performance in the 1990s indicates that these greater degrees of overconfidence and optimism have been experienced in the industry.
The value of reliably quantifying uncertainty is in reducing or eliminating expected disappointment and expected decision error (selecting the wrong projects), which is achieved by focusing primarily on elimination of overconfidence; other biases are taken care of in the process. Elimination of expected disappointment will improve industry performance overall to the extent that superior projects are available and better quantification of uncertainty allows identification of these superior projects.
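A minimal simulation sketch of the expected-disappointment idea described above: estimated project NPVs are generated with an optimism shift and overconfident (understated) uncertainty, the best-looking projects are selected, and disappointment is the shortfall of realized versus estimated portfolio NPV. All distributions, portfolio sizes, and bias levels are illustrative assumptions, not the dissertation's calibrated cases.

import numpy as np

rng = np.random.default_rng(3)

def expected_disappointment(optimism=0.3, overconfidence=0.5,
                            n_projects=50, n_select=10, n_trials=2000):
    """Average (estimated - realized) / estimated portfolio NPV of selected projects."""
    disappointments = []
    for _ in range(n_trials):
        true_npv = rng.lognormal(mean=0.0, sigma=0.8, size=n_projects)
        # Optimism shifts estimates up; overconfidence means the real estimation
        # error is larger than the evaluator acknowledges.
        est_npv = true_npv * np.exp(optimism + (1 + overconfidence)
                                    * rng.normal(0.0, 0.4, size=n_projects))
        picked = np.argsort(est_npv)[-n_select:]          # choose best-looking projects
        est_total, real_total = est_npv[picked].sum(), true_npv[picked].sum()
        disappointments.append((est_total - real_total) / est_total)
    return float(np.mean(disappointments))

for oc, opt in [(0.0, 0.0), (0.5, 0.3), (1.0, 0.6)]:
    print(f"overconfidence={oc}, optimism={opt}: "
          f"expected disappointment = {expected_disappointment(opt, oc):.0%}")

Note that even with no bias at all, selecting on noisy estimates produces positive disappointment, which is the optimizer's curse listed in the keywords.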
Advisors/Committee Members: McVay, Duane (advisor), Lee, John (committee member), Gildin, Eduardo (committee member), Mallick, Bani (committee member).
Subjects/Keywords: uncertainty; underestimating uncertainty; cognitive biases; overconfidence; optimism; pessimism; portfolios optimization; expected disappointment; optimizer's curse; expected decision error; directional bias
APA (6th Edition):
Aldossary, M. N. (2016). The Value of Assessing Uncertainty. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/156856
Chicago Manual of Style (16th Edition):
Aldossary, Mubarak Nasser. “The Value of Assessing Uncertainty.” 2016. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/156856.
MLA Handbook (7th Edition):
Aldossary, Mubarak Nasser. “The Value of Assessing Uncertainty.” 2016. Web. 07 Mar 2021.
Vancouver:
Aldossary MN. The Value of Assessing Uncertainty. [Internet] [Doctoral dissertation]. Texas A&M University; 2016. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/156856.
Council of Science Editors:
Aldossary MN. The Value of Assessing Uncertainty. [Doctoral Dissertation]. Texas A&M University; 2016. Available from: http://hdl.handle.net/1969.1/156856

Texas A&M University
5.
Li, Furong.
Statistical Inference for Large Spatial Data.
Degree: PhD, Statistics, 2017, Texas A&M University
URL: http://hdl.handle.net/1969.1/174900
The availability of large spatial and spatial-temporal data geocoded at accurate locations has fueled increasing interest in spatial modeling and analysis. In this dissertation, we present one study concerning inference on properties of a single spatial process, and then turn to multiple processes and provide two modeling approaches exploring the spatially varying relationship between covariates and the response variable of interest. In the first study, we investigate an inference tool based on quasi-likelihood, the composite likelihood (CL) method, and propose a new weighting scheme for constructing a CL for inference in spatial Gaussian process models. This weight function approximates the optimal weight derived from the theory of estimating equations. It combines a block-diagonal approximation and a tapering strategy to facilitate computation. Gains in statistical and computational efficiency over existing CL methods are illustrated through simulation studies.
The second investigation is the development of a new spatial modeling framework to capture the spatial structure, especially clustered structure, in the relationship between the response variable and explanatory variables. The proposed method, called Spatially Clustered Coefficient (SCC) regression, results in estimators of varying coefficients which convey important information about the changing pattern of the relationship. Based on our simulation results, the SCC method works very effectively in estimation for data with either clustered coefficients or smoothly varying coefficients. Thus, it allows researchers to explore the spatial structure in the regression coefficients without any a priori information. We also derive some oracle inequalities, which provide non-asymptotic error bounds on estimators and predictors. An application of the SCC method to temperature and salinity data in the Atlantic basin is provided for illustration.
Motivated by studies in geoscience showing that the influence of turbulent heat flux on sea surface temperature (SST) varies at different spatial scales, we develop a statistical model to quantify the continuous dependence of the SST-turbulent heat flux relationship (T-Q relationship) on spatial scales. In particular, we propose a penalized regression model in the spectral domain to estimate the changing relationship with spatial scales. While the application to the T-Q relationship is the main motivation for this work, it should be emphasized that the penalized spectral regression framework is general and thus applicable to other phenomena of interest as well.
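A minimal sketch of a weighted pairwise composite likelihood for a zero-mean Gaussian process, in the spirit of the weighting scheme discussed in the first part of the abstract above. The exponential covariance, the simple distance-based taper weight, and the cutoff are illustrative assumptions, not the dissertation's optimal weights.

import numpy as np

# Weighted pairwise composite log-likelihood for a zero-mean spatial Gaussian process.
def pairwise_cl(params, y, coords, cutoff=0.2):
    """params = (sill, range); pairs farther apart than `cutoff` get weight 0."""
    sill, rng_par = params
    n = len(y)
    cl = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.abs(coords[i] - coords[j])
            w = max(0.0, 1.0 - d / cutoff)       # simple linear taper weight
            if w == 0.0:
                continue
            c = sill * np.exp(-d / rng_par)      # exponential covariance
            cov = np.array([[sill, c], [c, sill]])
            z = np.array([y[i], y[j]])
            _, logdet = np.linalg.slogdet(cov)
            quad = z @ np.linalg.solve(cov, z)
            cl += w * (-0.5 * (logdet + quad + 2 * np.log(2 * np.pi)))
    return cl

# Hypothetical usage: evaluate the weighted CL on simulated 1-D data.
rng = np.random.default_rng(4)
coords = np.sort(rng.uniform(0, 1, size=80))
true_cov = 1.0 * np.exp(-np.abs(coords[:, None] - coords[None, :]) / 0.1)
y = rng.multivariate_normal(np.zeros(80), true_cov)
print("CL at true range 0.1 :", round(pairwise_cl((1.0, 0.1), y, coords), 2))
print("CL at range 0.5      :", round(pairwise_cl((1.0, 0.5), y, coords), 2))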
Advisors/Committee Members: Sang, Huiyan (advisor), Longnecker, Michael (committee member), Mallick, Bani (committee member), Saravanan, Ramalingam (committee member).
Subjects/Keywords: Weighted composite likelihood; Spatially clustered coefficient regression; Penalized spectral regression.
APA (6th Edition):
Li, F. (2017). Statistical Inference for Large Spatial Data. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/174900
Chicago Manual of Style (16th Edition):
Li, Furong. “Statistical Inference for Large Spatial Data.” 2017. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/174900.
MLA Handbook (7th Edition):
Li, Furong. “Statistical Inference for Large Spatial Data.” 2017. Web. 07 Mar 2021.
Vancouver:
Li F. Statistical Inference for Large Spatial Data. [Internet] [Doctoral dissertation]. Texas A&M University; 2017. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/174900.
Council of Science Editors:
Li F. Statistical Inference for Large Spatial Data. [Doctoral Dissertation]. Texas A&M University; 2017. Available from: http://hdl.handle.net/1969.1/174900

Texas A&M University
6.
Payne, Richard Daniel.
Two-Stage Metropolis Hastings; Bayesian Conditional Density Estimation & Survival Analysis via Partition Modeling, Laplace Approximations, and Efficient Computation.
Degree: PhD, Statistics, 2018, Texas A&M University
URL: http://hdl.handle.net/1969.1/173405
Bayesian statistical methods are known for their flexibility in modeling. This flexibility is possible because parameters can often be estimated via Markov chain Monte Carlo methods. In large datasets or models with many parameters, however, Markov chain Monte Carlo methods become insufficient and inefficient. We introduce the two-stage Metropolis-Hastings algorithm, which modifies the proposal distribution of the Metropolis-Hastings algorithm via a screening stage to reduce the computational cost. The screening stage requires a cheap estimate of the log-likelihood and speeds up computation even in complex models such as Bayesian multivariate adaptive regression splines. Next, a partition model constructed from a Voronoi tessellation is proposed for conditional density estimation using logistic Gaussian processes. A Laplace approximation is used to approximate the marginal likelihood, providing a tractable Markov chain Monte Carlo algorithm. In simulations and an application to windmill power output, the model successfully provides interpretation and flexibly models the densities. Last, a Bayesian tree partition model is proposed to model the hazard function of survival and reliability models. The piecewise-constant hazard function in each partition element is modeled via a latent Gaussian process. The marginal likelihood is estimated using Laplace approximations to yield a tractable reversible jump Markov chain Monte Carlo algorithm. The method is successful in simulations and provides insight into lung cancer survival rates in relation to protein expression levels.
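A minimal sketch of a two-stage Metropolis-Hastings step of the kind described above: a cheap approximate log-likelihood screens proposals, and only survivors are promoted to the expensive exact evaluation, with a correction so the chain still targets the true posterior. The toy target, the surrogate (a rescaled subsample), and the random-walk proposal are illustrative assumptions, not the dissertation's models.

import numpy as np

rng = np.random.default_rng(5)

# Toy target: posterior of a normal mean with known unit variance, flat prior.
data = rng.normal(1.5, 1.0, size=5000)
subset = data[:100]                       # cheap surrogate uses a small subsample

def loglik(theta, y):
    return -0.5 * np.sum((y - theta) ** 2)

def full_ll(theta):        # expensive stage-2 evaluation
    return loglik(theta, data)

def cheap_ll(theta):       # stage-1 screening approximation (rescaled subsample)
    return loglik(theta, subset) * len(data) / len(subset)

def two_stage_mh(n_iter=5000, step=0.05, theta0=0.0):
    theta, ll_full, ll_cheap = theta0, full_ll(theta0), cheap_ll(theta0)
    draws, n_promoted = [], 0
    for _ in range(n_iter):
        prop = theta + step * rng.normal()            # symmetric random-walk proposal
        prop_cheap = cheap_ll(prop)
        # Stage 1: screen with the cheap approximation only.
        if np.log(rng.uniform()) < prop_cheap - ll_cheap:
            n_promoted += 1
            prop_full = full_ll(prop)
            # Stage 2: correct with the exact likelihood (delayed-acceptance ratio).
            if np.log(rng.uniform()) < (prop_full - ll_full) - (prop_cheap - ll_cheap):
                theta, ll_full, ll_cheap = prop, prop_full, prop_cheap
        draws.append(theta)
    return np.array(draws), n_promoted / n_iter

draws, promote_rate = two_stage_mh()
print(f"posterior mean ~ {draws[1000:].mean():.3f}, "
      f"fraction of proposals reaching the full likelihood: {promote_rate:.2f}")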
Advisors/Committee Members: Mallick, Bani K (advisor), Bhattacharya, Anirban (committee member), Ding, Yu (committee member), Huang, Jianhua (committee member).
Subjects/Keywords: Bayesian statistics; Laplace approximation; partition model; Gaussian process; Markov chain Monte Carlo; survival analysis
APA (6th Edition):
Payne, R. D. (2018). Two-Stage Metropolis Hastings; Bayesian Conditional Density Estimation & Survival Analysis via Partition Modeling, Laplace Approximations, and Efficient Computation. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/173405
Chicago Manual of Style (16th Edition):
Payne, Richard Daniel. “Two-Stage Metropolis Hastings; Bayesian Conditional Density Estimation & Survival Analysis via Partition Modeling, Laplace Approximations, and Efficient Computation.” 2018. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/173405.
MLA Handbook (7th Edition):
Payne, Richard Daniel. “Two-Stage Metropolis Hastings; Bayesian Conditional Density Estimation & Survival Analysis via Partition Modeling, Laplace Approximations, and Efficient Computation.” 2018. Web. 07 Mar 2021.
Vancouver:
Payne RD. Two-Stage Metropolis Hastings; Bayesian Conditional Density Estimation & Survival Analysis via Partition Modeling, Laplace Approximations, and Efficient Computation. [Internet] [Doctoral dissertation]. Texas A&M University; 2018. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/173405.
Council of Science Editors:
Payne RD. Two-Stage Metropolis Hastings; Bayesian Conditional Density Estimation & Survival Analysis via Partition Modeling, Laplace Approximations, and Efficient Computation. [Doctoral Dissertation]. Texas A&M University; 2018. Available from: http://hdl.handle.net/1969.1/173405

Texas A&M University
7.
Chakraborty, Antik.
Bayesian Shrinkage: Computation, Methods and Theory.
Degree: PhD, Statistics, 2018, Texas A&M University
URL: http://hdl.handle.net/1969.1/174156
Sparsity is a standard structural assumption that is made while modeling high-dimensional statistical parameters. This assumption essentially entails a lower-dimensional embedding of the high-dimensional parameter, thus enabling sound statistical inference. Apart from this obvious statistical motivation, in many modern applications of statistics, such as genomics and neuroscience, parameters of interest are indeed of this nature.
For almost two decades, spike-and-slab type priors have been the Bayesian gold standard for modeling sparsity. However, due to their computational bottlenecks, shrinkage priors have emerged as a powerful alternative. This family of priors can almost exclusively be represented as scale mixtures of Gaussian distributions, and posterior Markov chain Monte Carlo (MCMC) updates of the related parameters are then relatively easy to design. Although shrinkage priors were tipped as being computationally scalable in high dimensions, when the number of parameters is in the thousands or more they come with their own computational challenges. Standard MCMC algorithms implementing shrinkage priors generally scale cubically in the dimension of the parameter, severely limiting real-life application of these priors. The first chapter of this document addresses this computational issue and proposes an alternative exact posterior sampling algorithm whose complexity scales linearly in the ambient dimension.
The algorithm developed in the first chapter is specifically designed for regression problems. However, simple modifications of it allow tackling other high-dimensional problems where these priors have found little application. In the second chapter, we develop a Bayesian method based on shrinkage priors for high-dimensional multiple-response regression. We show how proper shrinkage may be used for modeling high-dimensional low-rank matrices. Unlike spike-and-slab type priors, shrinkage priors are unable to produce exact zeros in the posterior. In this chapter we also devise two independent post-MCMC processing schemes based on the idea of soft-thresholding with default choices of tuning parameters. These post-processing steps provide exact estimates of the row and rank sparsity in the parameter matrix.
Theoretical study of posterior convergence rates under shrinkage priors is relatively underdeveloped. While we do not attempt to provide a unifying foundation to study these properties, in chapter three we choose a specific member of the shrinkage family, known as the horseshoe prior, and study its convergence rates in several high-dimensional models. These results are completely new in the literature and also establish the horseshoe prior's optimality in the minimax sense in high-dimensional problems.
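The first chapter concerns sampling a coefficient vector from the Gaussian full conditional N((Phi' Phi + D^-1)^-1 Phi' alpha, (Phi' Phi + D^-1)^-1) that arises under Gaussian scale-mixture shrinkage priors. Below is a sketch of a data-augmentation sampler of this kind, written from the generic matrix identities, whose cost is dominated by an n x n solve rather than a p x p factorization; the dissertation's exact algorithm and notation may differ.

import numpy as np

rng = np.random.default_rng(6)

def sample_gaussian_scale_mixture(Phi, alpha, d):
    """Draw theta ~ N(Sigma @ Phi.T @ alpha, Sigma), Sigma = (Phi.T@Phi + diag(1/d))^-1,
    using only an n x n linear solve (n = rows of Phi, p = columns)."""
    n, p = Phi.shape
    u = np.sqrt(d) * rng.normal(size=p)            # u ~ N(0, D)
    delta = rng.normal(size=n)                     # delta ~ N(0, I_n)
    v = Phi @ u + delta
    M = Phi @ (d[:, None] * Phi.T) + np.eye(n)     # M = Phi D Phi^T + I_n
    w = np.linalg.solve(M, alpha - v)
    return u + d * (Phi.T @ w)

# Sanity check against the brute-force p x p computation on a small problem.
n, p = 50, 200
Phi = rng.normal(size=(n, p))
alpha = rng.normal(size=n)
d = rng.exponential(size=p)                        # local scales, e.g. horseshoe-style

Sigma = np.linalg.inv(Phi.T @ Phi + np.diag(1.0 / d))
mean_direct = Sigma @ Phi.T @ alpha
draws = np.array([sample_gaussian_scale_mixture(Phi, alpha, d) for _ in range(4000)])
print("max |mean error|:", np.abs(draws.mean(axis=0) - mean_direct).max())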
Advisors/Committee Members: Mallick, Bani Kumar (advisor), Bhattacharya, Anirban (advisor), Carroll, Raymond Jerome (committee member), Sivakumar, Natarajan (committee member).
Subjects/Keywords: High-dimensional; Sparsity; Shrinkage priors; Low-rank; Convergence rates; Factor models; Regression
APA (6th Edition):
Chakraborty, A. (2018). Bayesian Shrinkage: Computation, Methods and Theory. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/174156
Chicago Manual of Style (16th Edition):
Chakraborty, Antik. “Bayesian Shrinkage: Computation, Methods and Theory.” 2018. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/174156.
MLA Handbook (7th Edition):
Chakraborty, Antik. “Bayesian Shrinkage: Computation, Methods and Theory.” 2018. Web. 07 Mar 2021.
Vancouver:
Chakraborty A. Bayesian Shrinkage: Computation, Methods and Theory. [Internet] [Doctoral dissertation]. Texas A&M University; 2018. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/174156.
Council of Science Editors:
Chakraborty A. Bayesian Shrinkage: Computation, Methods and Theory. [Doctoral Dissertation]. Texas A&M University; 2018. Available from: http://hdl.handle.net/1969.1/174156

Texas A&M University
8.
De, Debkumar.
Essays on Bayesian Time Series and Variable Selection.
Degree: PhD, Statistics, 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/152793
Estimating model parameters in dynamic models continues to be a challenge. In this dissertation, we introduce a stochastic-approximation-based parameter estimation approach under an Ensemble Kalman Filter set-up. Asymptotic properties of the resulting estimates are discussed. We compare the proposed method to current methods via simulation studies and demonstrate its predictive performance on a large spatio-temporal dataset.
In the second topic, we present a method for simultaneous estimation of regression parameters and the covariance matrix, developed for a nonparametric seemingly unrelated regression problem. This is a very flexible modeling technique that essentially performs sparse high-dimensional multiple-predictor (p), multiple-response (q) regression where the responses may be correlated. Such data appear abundantly in the fields of genomics, finance and econometrics. We illustrate and compare the performance of the proposed techniques with previous analyses using both simulated and real multivariate data arising in econometrics and government.
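A minimal sketch of the stochastic ensemble Kalman filter analysis step that the first part of the abstract builds on; the stochastic-approximation parameter updates themselves are not reproduced, and the linear observation operator, ensemble size, and noise levels are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(7)

def enkf_analysis(X_f, y_obs, H, R):
    """Stochastic EnKF update. X_f: (n_state, n_ens) forecast ensemble."""
    n_state, n_ens = X_f.shape
    A = X_f - X_f.mean(axis=1, keepdims=True)          # forecast anomalies
    P_Ht = (A @ (H @ A).T) / (n_ens - 1)                # P^f H^T from the ensemble
    S = H @ P_Ht + R                                    # innovation covariance
    K = P_Ht @ np.linalg.inv(S)                         # Kalman gain
    # Perturbed observations so the analysis ensemble keeps the right spread.
    Y = y_obs[:, None] + np.linalg.cholesky(R) @ rng.normal(size=(len(y_obs), n_ens))
    return X_f + K @ (Y - H @ X_f)

# Hypothetical usage: 3-variable state, 2 observed components, 40 members.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
R = 0.25 * np.eye(2)
truth = np.array([1.0, -2.0, 0.5])
X_f = truth[:, None] + rng.normal(scale=1.0, size=(3, 40))
y_obs = H @ truth + np.linalg.cholesky(R) @ rng.normal(size=2)
X_a = enkf_analysis(X_f, y_obs, H, R)
print("forecast mean:", X_f.mean(axis=1).round(2), " analysis mean:", X_a.mean(axis=1).round(2))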
Advisors/Committee Members: Liang, Faming (advisor), Mallick, Bani K (advisor), Pourahmadi, Mohsen (committee member), Datta-Gupta, Akhil (committee member).
Subjects/Keywords: Ensemble Kalman Filter; Stochastic Approximation; Non-parametric Regression; Matrix variate regression; Variable selection
APA (6th Edition):
De, D. (2014). Essays on Bayesian Time Series and Variable Selection. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/152793
Chicago Manual of Style (16th Edition):
De, Debkumar. “Essays on Bayesian Time Series and Variable Selection.” 2014. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/152793.
MLA Handbook (7th Edition):
De, Debkumar. “Essays on Bayesian Time Series and Variable Selection.” 2014. Web. 07 Mar 2021.
Vancouver:
De D. Essays on Bayesian Time Series and Variable Selection. [Internet] [Doctoral dissertation]. Texas A&M University; 2014. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/152793.
Council of Science Editors:
De D. Essays on Bayesian Time Series and Variable Selection. [Doctoral Dissertation]. Texas A&M University; 2014. Available from: http://hdl.handle.net/1969.1/152793

Texas A&M University
9.
Roh, Soojin.
Robust Ensemble Kalman Filters and Localization for Multiple State Variables.
Degree: PhD, Statistics, 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/153268
The ensemble Kalman filter (EnKF) is a statistical technique used to estimate the state of a nonlinear spatio-temporal dynamical system. This dissertation consists of three parts. First, we develop a methodology to make the EnKF robust, based on robust statistics. This methodology is necessary, since current EnKF algorithms tend to be sensitive to gross observation errors caused by technical or human errors during the data collection process, resulting in large biases or error variances. Second, we discuss localization in EnKF algorithms for simultaneous estimation of multiple state variables. Localization of the background-error covariance has proven to be an efficient method for reducing sampling errors and compensating for the underestimation of the background-error covariance terms. For a system of multiple state variables, the localization should be carefully applied in order to guarantee positive-definiteness of the matrices of the filtered background-error covariances. Rigorous localization methods for the case of multiple state variables, however, have rarely been considered in the literature. We introduce a number of localization filters that ensure that the background-error covariance matrix is positive-definite. Lastly, we extend the proposed robust method to both linear and nonlinear dynamical systems of multiple state variables.
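A minimal sketch of single-variable covariance localization by a Schur (element-wise) product of the sample background-error covariance with a distance-based taper. A Gaussian-shaped taper is used purely for illustration (a compactly supported correlation such as Gaspari-Cohn is the usual operational choice), and the multi-variable constructions studied in the dissertation are not reproduced.

import numpy as np

rng = np.random.default_rng(8)

# A sample covariance from a small ensemble is rank-deficient and full of
# spurious long-range correlations; localization damps them.
n_state, n_ens = 60, 20
grid = np.arange(n_state)
true_cov = np.exp(-np.abs(grid[:, None] - grid[None, :]) / 5.0)
ens = np.linalg.cholesky(true_cov + 1e-8 * np.eye(n_state)) @ rng.normal(size=(n_state, n_ens))
A = ens - ens.mean(axis=1, keepdims=True)
P_sample = (A @ A.T) / (n_ens - 1)

# Gaussian-shaped taper: itself a valid correlation function, so the Schur product
# with P_sample stays positive semi-definite (Schur product theorem).
L = 8.0
taper = np.exp(-0.5 * (np.abs(grid[:, None] - grid[None, :]) / L) ** 2)
P_loc = taper * P_sample

far = np.abs(grid[:, None] - grid[None, :]) > 30
print("mean |spurious correlation| beyond 30 grid points:",
      f"raw {np.abs(P_sample[far]).mean():.3f}, localized {np.abs(P_loc[far]).mean():.3f}")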
Advisors/Committee Members: Jun, Mikyoung (advisor), Genton, Marc G. (advisor), Mallick, Bani (committee member), Szunyogh, Istvan (committee member).
Subjects/Keywords: Ensemble Kalman filter; Robust; Multivariate Localization
APA (6th Edition):
Roh, S. (2014). Robust Ensemble Kalman Filters and Localization for Multiple State Variables. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/153268
Chicago Manual of Style (16th Edition):
Roh, Soojin. “Robust Ensemble Kalman Filters and Localization for Multiple State Variables.” 2014. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/153268.
MLA Handbook (7th Edition):
Roh, Soojin. “Robust Ensemble Kalman Filters and Localization for Multiple State Variables.” 2014. Web. 07 Mar 2021.
Vancouver:
Roh S. Robust Ensemble Kalman Filters and Localization for Multiple State Variables. [Internet] [Doctoral dissertation]. Texas A&M University; 2014. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/153268.
Council of Science Editors:
Roh S. Robust Ensemble Kalman Filters and Localization for Multiple State Variables. [Doctoral Dissertation]. Texas A&M University; 2014. Available from: http://hdl.handle.net/1969.1/153268

Texas A&M University
10.
Wang, Yanqing.
Relative Risks Analysis in Nutritional Epidemiology.
Degree: PhD, Statistics, 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/153663
Motivated by a logistic regression problem involving diet and cancer, we reconsider the problem of forming a confidence interval for the ratio of two location parameters. We develop a new methodology, which we call the Direct Integral Method for Ratios (DIMER). In simulations, we compare this method to many others, including Wald's method, Fieller's interval, Hayya's method, the nonparametric bootstrap and the parametric bootstrap. These simulations show that, generally, DIMER more closely achieves the nominal confidence level, and in those cases where the other methods achieve the nominal levels, DIMER generally has smaller confidence interval lengths. We also show that DIMER eliminates the possibility of infinite or extremely long confidence intervals, something that can occur with Fieller's interval.
Furthermore, we study the real Healthy Eating Index-2005 (HEI-2005) data set from the NIH-AARP Study of Diet and Health and consider a weighted logistic regression model in which there are multiple subpopulations, with multiple diseases within each subpopulation. Based on this model, we present six different approaches to forming confidence intervals for the relative risks of different diseases in different subpopulations, including DIMER. The asymptotic distributions of the estimates of the log relative risks obtained by maximum likelihood and by the nonparametric bootstrap are provided. Next, algorithms are presented to perform hypothesis tests and likelihood ratio tests to check whether there are significant differences between our proposed model and three other logistic regression models. In addition, the adaptive lasso and an estimator with bounded constraints are described for variable selection, and a novel algorithm to solve the nonlinear regression model with an L1-norm penalty is proposed. The application of all these methods to the HEI-2005 data is illustrated.
Additionally, we extend the linear function of nutrition components inside the logistic regression model to a nonlinear form. Moreover, we account for constraints suggested by biological and nutritional knowledge and propose a logistic regression model involving I-spline basis functions, together with an algorithm to solve it. Application to the real HEI-2005 data set and a comparison to a logistic model with total HEI scores are also presented.
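DIMER itself is the dissertation's contribution and is not reproduced here; as context for the comparison above, a sketch of Fieller's interval for a ratio of two means, one of the comparators named. The normal-approximation set-up and the summary statistics in the usage line are illustrative assumptions.

import numpy as np
from scipy import stats

def fieller_interval(xbar, ybar, vxx, vyy, vxy, df, level=0.95):
    """Fieller confidence set for the ratio xbar/ybar of two (possibly correlated) means.

    Solves the quadratic a*r^2 + b*r + c <= 0 obtained from
    (xbar - r*ybar)^2 <= t^2 * (vxx - 2*r*vxy + r^2*vyy).
    Returns (lo, hi), or None when the set is unbounded (the case DIMER avoids).
    """
    t2 = stats.t.ppf(1 - (1 - level) / 2, df) ** 2
    a = ybar ** 2 - t2 * vyy
    b = -2 * (xbar * ybar - t2 * vxy)
    c = xbar ** 2 - t2 * vxx
    disc = b ** 2 - 4 * a * c
    if a <= 0 or disc < 0:
        return None                       # unbounded or exclusive confidence region
    r1, r2 = (-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)
    return min(r1, r2), max(r1, r2)

# Hypothetical usage with made-up summary statistics.
print(fieller_interval(xbar=2.0, ybar=4.0, vxx=0.04, vyy=0.09, vxy=0.01, df=30))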
Advisors/Committee Members: Carroll, Raymond (advisor), Mallick, Bani (advisor), Baladandayuthapani, Veera (committee member), Braga-Neto, Ulisses (committee member).
Subjects/Keywords: Adaptive Lasso; DIMER; Direct Integral Method for Ratios; HEI-2005; I-spline; Relative Risk; Variable Selection.
APA (6th Edition):
Wang, Y. (2014). Relative Risks Analysis in Nutritional Epidemiology. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/153663
Chicago Manual of Style (16th Edition):
Wang, Yanqing. “Relative Risks Analysis in Nutritional Epidemiology.” 2014. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/153663.
MLA Handbook (7th Edition):
Wang, Yanqing. “Relative Risks Analysis in Nutritional Epidemiology.” 2014. Web. 07 Mar 2021.
Vancouver:
Wang Y. Relative Risks Analysis in Nutritional Epidemiology. [Internet] [Doctoral dissertation]. Texas A&M University; 2014. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/153663.
Council of Science Editors:
Wang Y. Relative Risks Analysis in Nutritional Epidemiology. [Doctoral Dissertation]. Texas A&M University; 2014. Available from: http://hdl.handle.net/1969.1/153663

Texas A&M University
11.
Xia, Xiaoyang.
History-Matching Production Data Using Ensemble Smoother with Multiple Data Assimilation: A Comparative Study.
Degree: MS, Petroleum Engineering, 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/154226
Reservoir simulation models are generated by petroleum engineers to optimize field operation and production, thus maximizing oil recovery. History matching methods are extensively used for reservoir model calibration and petrophysical property estimation by matching numerical simulation results with the true oil production history. The sequential reservoir-model-updating technique known as the ensemble Kalman filter (EnKF) has gained popularity in automatic history matching because of its simple conceptual formulation and ease of implementation, and its computational cost is relatively affordable compared with other sophisticated assimilation methods. The Ensemble Smoother is a viable alternative to EnKF. Unlike EnKF, the Ensemble Smoother computes a global update by simultaneously assimilating all available data and provides a significant reduction in simulation time. However, the Ensemble Smoother typically yields a data match significantly inferior to that obtained with EnKF. Ensemble smoother with multiple data assimilation (ES-MDA) has been developed as an efficient iterative form of the Ensemble Smoother and is compared here with conventional EnKF.
In ES-MDA the same set of data is assimilated multiple times with an inflated covariance matrix of the measurement error. We apply ES-MDA and EnKF to generate multiple realizations of the permeability field by history matching production data including bottom-hole pressure, water cut and gas-oil ratio. Both algorithms have been applied to a synthetic heterogeneous case and the Goldsmith field case. Moreover, ES-MDA coupled with various covariance localization methods (distance-based, streamline-based and hierarchical ensemble filter localization) is compared in terms of both the quality of the history match and the permeability distribution.
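A minimal sketch of the ES-MDA update loop described above, with a toy linear forward model standing in for the reservoir simulator. The inflation coefficients use the common uniform choice alpha_k = N_a (so that the sum of 1/alpha_k equals 1); everything else, including the problem sizes, is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(9)

def es_mda(m_ens, forward, d_obs, C_e, n_assim=4):
    """Ensemble smoother with multiple data assimilation.

    m_ens: (n_param, n_ens) prior ensemble; forward maps (n_param,) -> (n_data,).
    The same data are assimilated n_assim times with the measurement-error
    covariance inflated by alpha_k = n_assim.
    """
    n_ens = m_ens.shape[1]
    for _ in range(n_assim):
        alpha = float(n_assim)
        D = np.column_stack([forward(m) for m in m_ens.T])       # predicted data
        dm = m_ens - m_ens.mean(axis=1, keepdims=True)
        dd = D - D.mean(axis=1, keepdims=True)
        C_md = dm @ dd.T / (n_ens - 1)
        C_dd = dd @ dd.T / (n_ens - 1)
        # Perturb observations with inflated noise, then update every member.
        noise = np.sqrt(alpha) * (np.linalg.cholesky(C_e) @ rng.normal(size=D.shape))
        K = C_md @ np.linalg.inv(C_dd + alpha * C_e)
        m_ens = m_ens + K @ (d_obs[:, None] + noise - D)
    return m_ens

# Hypothetical usage: recover a 10-parameter "permeability" vector from 6 linear data.
n_param, n_data, n_ens = 10, 6, 100
G = rng.normal(size=(n_data, n_param))
m_true = rng.normal(size=n_param)
C_e = 0.01 * np.eye(n_data)
d_obs = G @ m_true + np.linalg.cholesky(C_e) @ rng.normal(size=n_data)
prior = rng.normal(size=(n_param, n_ens))
posterior = es_mda(prior, lambda m: G @ m, d_obs, C_e)
print("prior misfit :", np.linalg.norm(prior.mean(axis=1) - m_true).round(2))
print("ES-MDA misfit:", np.linalg.norm(posterior.mean(axis=1) - m_true).round(2))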
Advisors/Committee Members: Datta-Gupta, Akhil (advisor), King, Michael (committee member), Mallick, Bani (committee member).
Subjects/Keywords: Ensemble Smoother with Multiple Data Assimilation; History Matching
APA (6th Edition):
Xia, X. (2014). History-Matching Production Data Using Ensemble Smoother with Multiple Data Assimilation: A Comparative Study. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/154226
Chicago Manual of Style (16th Edition):
Xia, Xiaoyang. “History-Matching Production Data Using Ensemble Smoother with Multiple Data Assimilation: A Comparative Study.” 2014. Masters Thesis, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/154226.
MLA Handbook (7th Edition):
Xia, Xiaoyang. “History-Matching Production Data Using Ensemble Smoother with Multiple Data Assimilation: A Comparative Study.” 2014. Web. 07 Mar 2021.
Vancouver:
Xia X. History-Matching Production Data Using Ensemble Smoother with Multiple Data Assimilation: A Comparative Study. [Internet] [Masters thesis]. Texas A&M University; 2014. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/154226.
Council of Science Editors:
Xia X. History-Matching Production Data Using Ensemble Smoother with Multiple Data Assimilation: A Comparative Study. [Masters Thesis]. Texas A&M University; 2014. Available from: http://hdl.handle.net/1969.1/154226

Texas A&M University
12.
Xue, Jingnan.
Robust Model-free Variable Screening, Double-parallel Monte Carlo and Average Bayesian Information Criterion.
Degree: PhD, Statistics, 2017, Texas A&M University
URL: http://hdl.handle.net/1969.1/187253
Big data analysis and high-dimensional data analysis are two popular and challenging topics in current statistical research. They bring many opportunities as well as many challenges. For big data, traditional methods are generally not efficient enough, from both a time perspective and a space perspective. For high-dimensional data, most traditional methods cannot be implemented, let alone maintain their desirable properties, such as consistency.
In this dissertation, three new strategies are proposed to solve these issues. HZSIS is a robust model-free variable screening method that possesses the sure screening property under the ultrahigh-dimensional setting. It is based on the nonparanormal transformation and the Henze-Zirkler test. The numerical results indicate that, compared to existing methods, the proposed method is more robust to data generated from heavy-tailed distributions and/or complex models with interaction variables.
Double Parallel Monte Carlo is a simple, practical and efficient MCMC algorithm for Bayesian analysis of big data. The proposed algorithm divides the big dataset into smaller subsets and provides a simple method to aggregate the subset posteriors to approximate the full-data posterior. To further speed up computation, the proposed algorithm employs the population stochastic approximation Monte Carlo (Pop-SAMC) algorithm, a parallel MCMC algorithm, to simulate from each subset posterior. Since the proposed algorithm involves two levels of parallelism, data parallelism and simulation parallelism, it is coined "Double Parallel Monte Carlo". The validity of the proposed algorithm is justified both mathematically and numerically.
Average Bayesian Information Criterion (ABIC) and its high-dimensional variant, Average Extended Bayesian Information Criterion (AEBIC), lead to an innovative way to use posterior samples to conduct model selection. The consistency of this method is established for the high-dimensional generalized linear model under some sparsity and regularity conditions. The numerical results also indicate that, when the sample size is large enough, this method can accurately select the smallest true model with high probability.
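A minimal sketch of the divide-and-recombine idea behind the data-parallel level above: subset posteriors are sampled independently and their draws are recombined by precision-weighted averaging (the consensus Monte Carlo rule). This recombination rule is only a stand-in for illustration; the dissertation's aggregation method and its Pop-SAMC sampler are not reproduced.

import numpy as np

rng = np.random.default_rng(10)

# Toy model: normal mean with known unit variance and a flat prior, so every
# posterior below is available in closed form and "sampling" is exact.
data = rng.normal(2.0, 1.0, size=10_000)
n_shards, n_draws = 10, 5000
shards = np.array_split(data, n_shards)

def subset_posterior_draws(shard):
    # With a flat prior, the subset posterior is N(mean(shard), 1/len(shard)).
    return rng.normal(shard.mean(), 1.0 / np.sqrt(len(shard)), size=n_draws)

sub_draws = np.array([subset_posterior_draws(s) for s in shards])   # (n_shards, n_draws)

# Precision-weighted (consensus) recombination of the sub-draws.
weights = 1.0 / sub_draws.var(axis=1)
combined = (weights[:, None] * sub_draws).sum(axis=0) / weights.sum()

exact_mean, exact_sd = data.mean(), 1.0 / np.sqrt(len(data))
print(f"combined: mean={combined.mean():.4f}, sd={combined.std():.4f}")
print(f"full-data posterior: mean={exact_mean:.4f}, sd={exact_sd:.4f}")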
Advisors/Committee Members: Sinha, Samiran (advisor), Mallick, Bani (committee member), Bhattacharya, Anirban (committee member), Zhou, Jianxin (committee member).
Subjects/Keywords: Variable selection; variable screening; ultrahigh dimensional data analysis; big data; parallel computing; MCMC
APA (6th Edition):
Xue, J. (2017). Robust Model-free Variable Screening, Double-parallel Monte Carlo and Average Bayesian Information Criterion. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/187253
Chicago Manual of Style (16th Edition):
Xue, Jingnan. “Robust Model-free Variable Screening, Double-parallel Monte Carlo and Average Bayesian Information Criterion.” 2017. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/187253.
MLA Handbook (7th Edition):
Xue, Jingnan. “Robust Model-free Variable Screening, Double-parallel Monte Carlo and Average Bayesian Information Criterion.” 2017. Web. 07 Mar 2021.
Vancouver:
Xue J. Robust Model-free Variable Screening, Double-parallel Monte Carlo and Average Bayesian Information Criterion. [Internet] [Doctoral dissertation]. Texas A&M University; 2017. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/187253.
Council of Science Editors:
Xue J. Robust Model-free Variable Screening, Double-parallel Monte Carlo and Average Bayesian Information Criterion. [Doctoral Dissertation]. Texas A&M University; 2017. Available from: http://hdl.handle.net/1969.1/187253

Texas A&M University
13.
Zhang, Lin.
Application of Bayesian Hierarchical Models in Genetic Data Analysis.
Degree: PhD, Statistics, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/148056
Genetic data analysis has been attracting a lot of attention for understanding the mechanisms of the development and progression of diseases such as cancer, and is crucial in discovering genetic markers and treatment targets in medical research. This dissertation focuses on several important issues in genetic data analysis: graphical network modeling, feature selection, and covariance estimation.
First, we develop a gene network modeling method for discrete gene expression data, produced by technologies such as serial analysis of gene expression and RNA sequencing experiments, which generate counts of mRNA transcripts in cell samples. We propose a generalized linear model to fit the discrete gene expression data and assume that the log ratios of the mean expression levels follow a Gaussian distribution. We derive the gene network structures by selecting covariance matrices of the Gaussian distribution with a hyper-inverse Wishart prior. We incorporate prior network models based on Gene Ontology information, which makes use of existing biological information on the genes of interest.
Next, we consider a variable selection problem where the variables have natural grouping structures, with application to the analysis of chromosomal copy number data. The chromosomal copy number data are produced by molecular inversion probe experiments, which measure probe-specific copy number changes. We propose a novel Bayesian variable selection method, the hierarchical structured variable selection (HSVS) method, which accounts for the natural gene and probe-within-gene architecture to identify important genes and probes associated with clinically relevant outcomes. We propose the HSVS model for grouped variable selection, where simultaneous selection of both groups and within-group variables is of interest. The HSVS model utilizes a discrete mixture prior distribution for group selection and group-specific Bayesian lasso hierarchies for variable selection within groups. We further provide methods for accounting for serial correlations within groups that incorporate Bayesian fused lasso methods for within-group selection.
Finally, we propose a Bayesian method of estimating high-dimensional covariance matrices that can be decomposed into a low-rank and a sparse component. This covariance structure has a wide range of applications, including the factor analytic model and the random effects model. We model covariance matrices with this decomposition structure by representing the covariance model in the form of a factor analytic model where the number of latent factors is unknown. We introduce binary indicators for estimating the rank of the low-rank component, combined with a Bayesian graphical lasso method for estimating the sparse component. We further extend our method to a graphical factor analytic model where the graphical model of the residuals is of interest. We achieve sparse estimation of the inverse covariance of the residuals in the graphical factor model by employing a hyper-inverse Wishart prior method for a decomposable graph and a Bayesian…
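A minimal sketch in the spirit of the first part of the abstract: count expression data are generated from a Poisson model whose log mean follows a Gaussian graphical structure, and a network is recovered from the log-scale data. The sparse-precision estimator used here (scikit-learn's graphical lasso) is a frequentist stand-in for the hyper-inverse Wishart covariance-selection approach described above, used only to illustrate the pipeline.

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(11)

# Simulate counts for 5 genes whose log-means follow a Gaussian chain graph.
n_samples, n_genes = 400, 5
prec = np.eye(n_genes) + 0.4 * (np.eye(n_genes, k=1) + np.eye(n_genes, k=-1))
cov = np.linalg.inv(prec)
log_mu = rng.multivariate_normal(np.full(n_genes, 2.0), cov, size=n_samples)
counts = rng.poisson(np.exp(log_mu))                     # SAGE / RNA-seq style counts

# Work on the (shifted) log scale and estimate a sparse precision matrix;
# its nonzero off-diagonal entries define the estimated gene network edges.
X = np.log1p(counts)
model = GraphicalLasso(alpha=0.05).fit(X)
edges = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)
         if abs(model.precision_[i, j]) > 1e-3]
print("estimated network edges:", edges)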
Advisors/Committee Members: Mallick, Bani K (advisor), Baladandayuthapani, Veera (advisor), Carroll, Raymond J (committee member), Adams, Garry (committee member).
Subjects/Keywords: covariance estimation; feature selection; graphical network modeling; genetic data analysis; Bayesian hierarchical model
APA (6th Edition):
Zhang, L. (2012). Application of Bayesian Hierarchical Models in Genetic Data Analysis. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/148056
Chicago Manual of Style (16th Edition):
Zhang, Lin. “Application of Bayesian Hierarchical Models in Genetic Data Analysis.” 2012. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/148056.
MLA Handbook (7th Edition):
Zhang, Lin. “Application of Bayesian Hierarchical Models in Genetic Data Analysis.” 2012. Web. 07 Mar 2021.
Vancouver:
Zhang L. Application of Bayesian Hierarchical Models in Genetic Data Analysis. [Internet] [Doctoral dissertation]. Texas A&M University; 2012. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/148056.
Council of Science Editors:
Zhang L. Application of Bayesian Hierarchical Models in Genetic Data Analysis. [Doctoral Dissertation]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/148056

Texas A&M University
14.
Olalotiti-Lawal, Feyisayo.
Application of Fast Marching Methods for Rapid Reservoir Forecast and Uncertainty Quantification.
Degree: MS, Petroleum Engineering, 2013, Texas A&M University
URL: http://hdl.handle.net/1969.1/150964
Rapid economic evaluations of investment alternatives in the oil and gas industry are typically contingent on fast and credible evaluations of reservoir models to make future forecasts. It is often important to also quantify the inherent risks and uncertainties in these evaluations. These tasks ideally require several full-scale numerical simulations, which is time consuming and impractical, if not impossible, with conventional (finite-difference) simulators in real-life situations. In this research, the aim is to improve the efficiency of these tasks by exploring applications of Fast Marching Methods (FMM) in both conventional and unconventional reservoir characterization problems.
In this work, we first applied FMM to rapidly rank multiple equi-probable geologic models. We demonstrated the suitability of drainage volume, efficiently calculated using FMM, as a surrogate parameter for field-wide cumulative oil production (FOPT). The probability distribution function (PDF) of the surrogate parameter was point-discretized to obtain three representative models for full simulation. Using the results from the simulations, the PDF of the reservoir performance parameter was constructed. We also investigated the applicability of a higher-order-moment-preserving approach, which resulted in better uncertainty quantification than traditional model selection methods.
Next, we applied FMM to a hydraulically fractured tight-oil reservoir model calibration problem. We specifically used the FMM geometric pressure approximation as a proxy for rapidly evaluating model proposals in a two-stage Markov chain Monte Carlo (MCMC) algorithm and demonstrated that the FMM-based proxy is suitable for this purpose, obtaining a significant improvement in efficiency compared to a conventional single-stage MCMC algorithm. Also in this work, we investigated the possibility of enhancing the computational efficiency of calculating the pressure field for both conventional and unconventional reservoirs using FMM. Good approximations of the steady-state pressure distributions were obtained for homogeneous conventional waterflood systems. In unconventional systems, we also recorded a slight improvement in computational efficiency when using FMM pressure approximations as initial guesses in pressure solvers.
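A minimal sketch of the FMM-based drainage-volume idea: solve an Eikonal equation for a front travel time from a well over a heterogeneous permeability field and accumulate pore volume inside the advancing front. The scikit-fmm package (skfmm.travel_time) is used as the Eikonal solver, and the speed field, porosity, and grid are illustrative assumptions rather than the dissertation's diffusive-time-of-flight formulation.

import numpy as np
import skfmm   # scikit-fmm: Eikonal solver used here as the fast-marching engine

rng = np.random.default_rng(12)

# Heterogeneous "permeability" field on a 100 x 100 grid (log-normal, illustrative).
nx = ny = 100
dx = 10.0                                   # cell size, meters (assumed)
perm = np.exp(rng.normal(0.0, 1.0, size=(nx, ny)))
porosity = 0.2
speed = np.sqrt(perm)                       # front-speed proxy increasing with permeability

# Zero level set at the well location (negative inside the source cell).
phi = np.ones((nx, ny))
phi[nx // 2, ny // 2] = -1.0

tau = skfmm.travel_time(phi, speed, dx=dx)  # front arrival time at every cell

# Drainage volume vs. time: pore volume of all cells the front has reached.
cell_pv = porosity * dx * dx                # per-cell pore volume (unit thickness)
for t in [50.0, 100.0, 200.0]:
    reached = np.asarray(tau) <= t
    print(f"t = {t:5.0f}: drainage volume ~ {reached.sum() * cell_pv:,.0f} m^3")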
Advisors/Committee Members: Datta-Gupta, Akhil (advisor), King, Michael (committee member), Mallick, Bani (committee member).
Subjects/Keywords: Fast Marching Method; Geologic Model Ranking; Model Calibration; Two-Stage MCMC; Geometric Pressure Approximation
APA (6th Edition):
Olalotiti-Lawal, F. (2013). Application of Fast Marching Methods for Rapid Reservoir Forecast and Uncertainty Quantification. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/150964
Chicago Manual of Style (16th Edition):
Olalotiti-Lawal, Feyisayo. “Application of Fast Marching Methods for Rapid Reservoir Forecast and Uncertainty Quantification.” 2013. Masters Thesis, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/150964.
MLA Handbook (7th Edition):
Olalotiti-Lawal, Feyisayo. “Application of Fast Marching Methods for Rapid Reservoir Forecast and Uncertainty Quantification.” 2013. Web. 07 Mar 2021.
Vancouver:
Olalotiti-Lawal F. Application of Fast Marching Methods for Rapid Reservoir Forecast and Uncertainty Quantification. [Internet] [Masters thesis]. Texas A&M University; 2013. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/150964.
Council of Science Editors:
Olalotiti-Lawal F. Application of Fast Marching Methods for Rapid Reservoir Forecast and Uncertainty Quantification. [Masters Thesis]. Texas A&M University; 2013. Available from: http://hdl.handle.net/1969.1/150964

Texas A&M University
15.
Mandal, Soutrik.
Analysis and Goodness-of-Fit Tests for Time-to-Event Models.
Degree: PhD, Statistics, 2018, Texas A&M University
URL: http://hdl.handle.net/1969.1/174133
The Cox proportional hazards model and the proportional odds model are some of the popular survival models often chosen to analyze censored time-to-event data. The properties of these models have been studied in detail by several authors. In recent years, the linear transformation models have gained substantial interest. Linear transformation models are a general class of models that contain the Cox proportional hazards and proportional odds models as special cases. This class thus provides much more flexibility in model selection. The linear transformation models have been studied in the comparatively simpler right-censoring scenario, and some authors have analyzed transformation models in the presence of measurement error. In this dissertation, I consider the problem of analyzing semiparametric transformation models in the more general interval-censoring setup when a covariate is measured with error. To the best of my knowledge this is an unexplored combination. I propose a semiparametric methodology to estimate the parameters of the linear transformation models. I use a flexible two-stage imputation technique to address the interval censoring and covariate measurement error. Finite-sample performance of the proposed method is judged via simulation studies. Finally, the suggested method is applied to analyze a real dataset from an AIDS clinical trial.
In the above discussion, I mentioned that the linear transformation models are a general class of models. A natural question that arises then is which model to select. I propose a new class of omnibus supremum tests based on martingale residuals for testing the goodness-of-fit of a specific model within the linear transformation models when the observations are subject to right censoring. The performance of the proposed test is judged via simulation studies. A guideline for extending this methodology to the interval-censoring scenario is also provided.
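A toy sketch of the supremum-type goodness-of-fit idea in the second paragraph, simplified to a constant-hazard (exponential) null model under right censoring: martingale-type residuals are cumulated over covariate-ordered subjects and the supremum of the resulting process is compared with a permutation reference. The null model, residual definition, and resampling scheme are simplifications for illustration, not the dissertation's omnibus test.

import numpy as np

rng = np.random.default_rng(13)

def sup_cum_residual_stat(time, event, covariate, lam_hat):
    """Supremum of the cumulative martingale-type residual process, ordered by covariate."""
    resid = event - lam_hat * time            # residual under a constant-hazard null
    order = np.argsort(covariate)
    return np.max(np.abs(np.cumsum(resid[order]))) / np.sqrt(len(time))

# Simulated right-censored data whose hazard actually depends on the covariate,
# so the constant-hazard null model is misspecified.
n = 400
x = rng.normal(size=n)
t_event = rng.exponential(scale=np.exp(-0.8 * x))      # true hazard depends on x
t_cens = rng.exponential(scale=2.0, size=n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(float)

lam_hat = event.sum() / time.sum()                     # MLE of the constant hazard
observed = sup_cum_residual_stat(time, event, x, lam_hat)

# Crude reference distribution: recompute the statistic after permuting the covariate,
# which mimics "no association left in the residuals".
perm_stats = [sup_cum_residual_stat(time, event, rng.permutation(x), lam_hat)
              for _ in range(500)]
print(f"observed sup statistic: {observed:.2f}, "
      f"permutation p-value ~ {np.mean(np.array(perm_stats) >= observed):.3f}")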
Advisors/Committee Members: Sinha, Samiran (advisor), Wang, Suojin (advisor), Mallick, Bani (committee member), Zoh, Roger (committee member).
Subjects/Keywords: interval censoring; linear transformation models; multiple imputation; semiparametric methods; martingale; goodness-of-fit tests
APA (6th Edition):
Mandal, S. (2018). Analysis and Goodness-of-Fit Tests for Time-to-Event Models. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/174133
Chicago Manual of Style (16th Edition):
Mandal, Soutrik. “Analysis and Goodness-of-Fit Tests for Time-to-Event Models.” 2018. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/174133.
MLA Handbook (7th Edition):
Mandal, Soutrik. “Analysis and Goodness-of-Fit Tests for Time-to-Event Models.” 2018. Web. 07 Mar 2021.
Vancouver:
Mandal S. Analysis and Goodness-of-Fit Tests for Time-to-Event Models. [Internet] [Doctoral dissertation]. Texas A&M University; 2018. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/174133.
Council of Science Editors:
Mandal S. Analysis and Goodness-of-Fit Tests for Time-to-Event Models. [Doctoral Dissertation]. Texas A&M University; 2018. Available from: http://hdl.handle.net/1969.1/174133

Texas A&M University
16.
Xun, Xiaolei.
Statistical Inference in Inverse Problems.
Degree: PhD, Statistics, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-10874
▼ Inverse problems have gained popularity in statistical research recently. This dissertation consists of two statistical inverse problems: a Bayesian approach to detection of small low-emission sources on a large random background, and parameter estimation methods for partial differential equation (PDE) models.
The source detection problem arises, for instance, in some homeland security applications. We address the problem of detecting the presence and location of a small low-emission source inside an object when the background noise dominates; the goal is to reach signal-to-noise ratio levels on the order of 10^-3. We develop a Bayesian approach to this problem in two dimensions. The method allows inference not only about the existence of the source but also about its location. We derive Bayes factors for model selection and estimation of location based on Markov chain Monte Carlo simulation. A simulation study shows that, with a sufficiently high total emission level, our method can effectively locate the source.
Differential equation (DE) models are widely used to model dynamic processes in many fields. The forward problem of solving the equations for given parameters that define the DEs has been extensively studied in the past. However, the statistical literature on the inverse problem of estimating parameters from observed state variables is relatively sparse, especially for PDE models. We propose two joint modeling schemes to estimate constant parameters in PDEs: a parameter cascading method and a Bayesian treatment. In both methods, the unknown functions are expressed via basis function expansions. For the parameter cascading method, we develop an algorithm to estimate the parameters and derive a sandwich estimator of the covariance matrix. For the Bayesian method, we develop a joint model for the data and the PDE and describe how Markov chain Monte Carlo techniques are employed to make posterior inference. A straightforward two-stage method is to first fit the data and then estimate the parameters by the least-squares principle. The three approaches are illustrated using simulated examples and compared via simulation studies. Simulation results show that the proposed methods outperform the two-stage method.
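As an illustration of the two-stage baseline described above, a small sketch under simplified assumptions (an ODE rather than a PDE, a polynomial rather than a spline basis): stage one smooths noisy state observations with a basis expansion, stage two estimates the constant parameter by least squares on the differential-equation residual. The names and the toy equation du/dt = -k*u are not from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy observations of u(t) = exp(-k_true * t), a stand-in "state variable".
k_true = 1.7
t = np.linspace(0.0, 2.0, 80)
u_obs = np.exp(-k_true * t) + 0.02 * rng.standard_normal(t.size)

# Stage 1: basis-function expansion (polynomial basis here; splines in practice).
degree = 7
B = np.vander(t, degree + 1, increasing=True)        # basis evaluated at t
coef, *_ = np.linalg.lstsq(B, u_obs, rcond=None)
u_hat = B @ coef

# Derivative of the fitted expansion, from differentiating the basis columns.
dB = np.zeros_like(B)
dB[:, 1:] = B[:, :-1] * np.arange(1, degree + 1)     # d/dt of t^j is j * t^(j-1)
du_hat = dB @ coef

# Stage 2: least squares on the equation residual du/dt = -k * u,
# giving k_hat = -<u_hat, du_hat> / <u_hat, u_hat>.
k_hat = -np.dot(u_hat, du_hat) / np.dot(u_hat, u_hat)
print(f"true k = {k_true:.3f}, two-stage estimate = {k_hat:.3f}")
```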
Advisors/Committee Members: Carroll, Raymond J. (advisor), Mallick, Bani K. (advisor), Sang, Huiyan (committee member), Kuchment, Peter (committee member).
Subjects/Keywords: Inverse problems; Bayesian method; Source detection; Parameter estimation; Parameter cascading; Partial differential equations.
APA (6th Edition):
Xun, X. (2012). Statistical Inference in Inverse Problems. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-10874
Chicago Manual of Style (16th Edition):
Xun, Xiaolei. “Statistical Inference in Inverse Problems.” 2012. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-10874.
MLA Handbook (7th Edition):
Xun, Xiaolei. “Statistical Inference in Inverse Problems.” 2012. Web. 07 Mar 2021.
Vancouver:
Xun X. Statistical Inference in Inverse Problems. [Internet] [Doctoral dissertation]. Texas A&M University; 2012. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-10874.
Council of Science Editors:
Xun X. Statistical Inference in Inverse Problems. [Doctoral Dissertation]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-10874

Texas A&M University
17.
Tao, Qing.
A Comparison of Waterflood Management Using Arrival Time Optimization and NPV Optimization.
Degree: MS, Petroleum Engineering, 2011, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2009-12-7238
▼ Waterflooding is currently the most commonly used method to improve oil recovery after primary depletion. Reservoir heterogeneity, such as the permeability distribution, can negatively affect the performance of waterflooding: high-permeability streaks can lead to early water breakthrough at the producers and thus reduce the sweep efficiency in the field. One approach to counteract the impact of heterogeneity and improve waterflood sweep efficiency is optimal rate allocation to the injectors and producers. Through optimal rate control, we can manage the propagation of the flood front, delay water breakthrough at the producers, and increase the sweep and hence the recovery efficiency. The arrival time optimization method uses a streamline-based approach to calculate water arrival time sensitivities with respect to production and injection rates. It can also optimize sweep efficiency over multiple realizations to account for geological uncertainty. To extend the scope of this optimization method to more general conditions, this work uses a finite difference simulator and streamline tracing software to conduct the optimization.
Apart from sweep efficiency, another widely used optimization objective is to maximize the net present value (NPV) within a given time period. Previous efforts on waterflood optimization used optimal control theory to allocate injection/production rates for fixed well configurations. The streamline-based approach gives the optimization result in a much more computationally efficient manner.
In the present study, we compare the arrival time optimization and NPV optimization results to show their strengths and limitations. The NPV optimization uses a perturbation method to calculate the gradients. The comparison is conducted on a 4-spot synthetic case. We then introduce an accelerated arrival time optimization, which adds an acceleration term to the objective function to speed up oil production in the field. The proposed new approach has the advantage of considering both the sweep efficiency and the net present value in the field.
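A hypothetical sketch of the perturbation-gradient NPV idea mentioned above, with a made-up analytic stand-in for the reservoir simulator: the gradient of NPV with respect to injector rate allocations is approximated by finite-difference perturbations, and a projected gradient step keeps the total injection rate fixed. Nothing below comes from the thesis.

```python
import numpy as np

# Toy "black-box" NPV for allocating a fixed total injection rate to 4 injectors.
# In practice this would be a reservoir-simulation run; here it is a made-up concave proxy.
def npv(q):
    efficiency = np.array([1.0, 0.6, 0.8, 0.4])     # hypothetical per-injector value
    return float(np.sum(efficiency * np.sqrt(q)))

def perturbation_gradient(f, q, eps=1e-4):
    """One-sided finite-difference (perturbation) gradient of f at q."""
    g = np.zeros_like(q)
    base = f(q)
    for i in range(q.size):
        qp = q.copy()
        qp[i] += eps
        g[i] = (f(qp) - base) / eps
    return g

total_rate = 400.0
q = np.full(4, total_rate / 4)                      # start from an equal allocation
for _ in range(300):                                # a few hundred small projected steps
    g = perturbation_gradient(npv, q)
    g -= g.mean()                                   # project onto sum(q) = constant
    q = np.clip(q + 100.0 * g, 1e-6, None)          # gradient ascent, rates stay positive
    q *= total_rate / q.sum()                       # re-impose the total-rate constraint

print("optimized allocation:", np.round(q, 1), " NPV:", round(npv(q), 2))
```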
Advisors/Committee Members: Datta-Gupta, Akhil (advisor), Jafarpour, Behnam (committee member), Mallick, Bani (committee member).
Subjects/Keywords: waterflood management; arrival time optimization; NPV optimization; rate control; sweep efficiency
APA (6th Edition):
Tao, Q. (2011). A Comparison of Waterflood Management Using Arrival Time Optimization and NPV Optimization. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-2009-12-7238
Chicago Manual of Style (16th Edition):
Tao, Qing. “A Comparison of Waterflood Management Using Arrival Time Optimization and NPV Optimization.” 2011. Masters Thesis, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-2009-12-7238.
MLA Handbook (7th Edition):
Tao, Qing. “A Comparison of Waterflood Management Using Arrival Time Optimization and NPV Optimization.” 2011. Web. 07 Mar 2021.
Vancouver:
Tao Q. A Comparison of Waterflood Management Using Arrival Time Optimization and NPV Optimization. [Internet] [Masters thesis]. Texas A&M University; 2011. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2009-12-7238.
Council of Science Editors:
Tao Q. A Comparison of Waterflood Management Using Arrival Time Optimization and NPV Optimization. [Masters Thesis]. Texas A&M University; 2011. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2009-12-7238

Texas A&M University
18.
Liu, Senmao.
A Comprehensive Approach for Sparse Principle Component Analysis using Regularized Singular Value Decomposition.
Degree: PhD, Statistics, 2016, Texas A&M University
URL: http://hdl.handle.net/1969.1/192022
▼ Principal component analysis (PCA) has been a widely used tool in statistics and data analysis for many years. A good PCA result should be both interpretable and accurate. However, neither interpretability nor accuracy is easily achieved in “big data” scenarios with large numbers of original variables. Sparse PCA was therefore developed, in which the obtained principal components (PCs) are linear combinations of a limited number of original variables, which yields good interpretability. In addition, theoretical results show that, when the genuine model is sparse, PCs obtained via sparse PCA rather than traditional PCA are consistent estimators. These aspects have made sparse PCA a hot research topic in recent years.
In this dissertation, we develop a comprehensive and systematic approach to sparse PCA using a regularized-SVD formulation. In detail, we propose the formulation and algorithm and show their consistency and convergence; we also show convergence to global optima using a limited number of trials, a breakthrough in the sparse PCA area. To guarantee orthogonality or uncorrelatedness when multiple PCs are extracted, we develop a method for sparse PCA with an orthogonality constraint, propose its algorithm, and show its convergence. To handle missing values in the design matrix, which often occur in practice, we develop a method for sparse PCA with missing values, propose its algorithm, and show its convergence. Moreover, to provide a principled way of selecting the tuning parameter in these formulations, we design an entry-wise cross-validation method based on sparse PCA with missing values. These contributions make our results practically useful and theoretically complete. A simulation study and real-world data analyses show that our method is competitive with existing methods in the “without missing” case and performs well in the “with missing” case, for which it is currently the only practical method.
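Not the dissertation's algorithm, but a minimal sketch of the general regularized-SVD route to sparse PCA that it builds on: alternate between updating the score direction and soft-thresholding the loading vector, i.e., a penalized rank-one SVD. The simulated data and the penalty value are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_pc_rank1(X, lam=0.5, n_iter=100):
    """Leading sparse loading vector via an alternating, regularized rank-1 SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # initialize from the ordinary SVD
    u, v = U[:, 0], s[0] * Vt[0]
    for _ in range(n_iter):
        v = soft_threshold(X.T @ u, lam)               # sparse loading update (lasso-type step)
        if np.allclose(v, 0):
            break
        u = X @ v
        u /= np.linalg.norm(u)                         # unit-norm score direction
    norm_v = np.linalg.norm(v)
    return v / norm_v if norm_v > 0 else v

# Example: 100 samples, 20 variables; only the first 3 variables carry the signal.
rng = np.random.default_rng(2)
scores = rng.standard_normal((100, 1))
loadings = np.zeros((1, 20))
loadings[0, :3] = [2.0, -1.5, 1.0]
X = scores @ loadings + 0.3 * rng.standard_normal((100, 20))
X -= X.mean(axis=0)                                    # center before PCA

v_sparse = sparse_pc_rank1(X, lam=2.0)
print("nonzero loadings at variables:", np.nonzero(v_sparse)[0])   # ideally the first three
```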
Advisors/Committee Members: Huang, Jianhua (advisor), Johnson, Valen (committee member), Mallick, Bani (committee member), Ding, Yu (committee member).
Subjects/Keywords: Principal Component Analysis; Sparse PCA; Singular Value Decomposition; Regularized SVD; Alternating Direction; Block Coordinate Descent; Regularity; Power Iteration; Global Optima; Orthogonal Constraint; Missing Values; Cross-Validation.
APA (6th Edition):
Liu, S. (2016). A Comprehensive Approach for Sparse Principle Component Analysis using Regularized Singular Value Decomposition. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/192022
Chicago Manual of Style (16th Edition):
Liu, Senmao. “A Comprehensive Approach for Sparse Principle Component Analysis using Regularized Singular Value Decomposition.” 2016. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/192022.
MLA Handbook (7th Edition):
Liu, Senmao. “A Comprehensive Approach for Sparse Principle Component Analysis using Regularized Singular Value Decomposition.” 2016. Web. 07 Mar 2021.
Vancouver:
Liu S. A Comprehensive Approach for Sparse Principle Component Analysis using Regularized Singular Value Decomposition. [Internet] [Doctoral dissertation]. Texas A&M University; 2016. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/192022.
Council of Science Editors:
Liu S. A Comprehensive Approach for Sparse Principle Component Analysis using Regularized Singular Value Decomposition. [Doctoral Dissertation]. Texas A&M University; 2016. Available from: http://hdl.handle.net/1969.1/192022

Texas A&M University
19.
Larsen, Allyson Elaine.
Approximation Schemes to Simplify Posterior Computation.
Degree: PhD, Statistics, 2020, Texas A&M University
URL: http://hdl.handle.net/1969.1/192454
▼ Markov chain Monte Carlo (MCMC) sampling methods often do not scale well to large datasets, so there has been increased interest in approximate Markov chain Monte Carlo (aMCMC) sampling methods. We propose two different aMCMC methods. For the first, we propose a new distribution, called the soft tMVN distribution, which provides a smooth approximation to the truncated multivariate normal (tMVN) distribution with linear constraints. The soft tMVN distribution can be used to approximate simulations from a multivariate truncated normal distribution with linear constraints, or it can itself serve as a prior in shape-constrained problems. We provide theoretical support for the approximation capability of the soft tMVN and further empirical evidence thereof. We then develop an aMCMC method for Bayesian monotone single-index modeling: we replace the usual tMVN prior with the soft tMVN prior and show that the soft tMVN prior gives similar statistical performance while running significantly faster.
The second aMCMC method is a multivariate convex regression method in which we approximate the max of affine functions with the softmax of affine functions. Convex regression methods that use the max of affine functions appear to do well in traditional frequentist settings but do not scale well to large data in Bayesian settings. We propose the softmax-affine convex (SMA) regression method, which replaces the max with the softmax function, a smooth function that approximates the max of affine functions. This allows gradients to be computed, which makes the Hamiltonian Monte Carlo (HMC) algorithm a natural choice for sampling from the posterior. We specify the priors for SMA and use Stan, which provides a default HMC algorithm, to sample from the posterior. We provide empirical evidence that SMA regression is comparable to existing convex regression methods, and we also provide a method for choosing the number of affine functions in the softmax function.
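To make the softmax-of-affine idea concrete, a small illustrative sketch (not the SMA implementation from the dissertation): the max of affine functions is replaced by a scaled log-sum-exp, which is smooth, differentiable, and within log(K)/beta of the true max, so the approximation tightens as the temperature beta grows. The affine pieces below are arbitrary.

```python
import numpy as np
from scipy.special import logsumexp

# K affine pieces a_k + b_k * x defining a convex piecewise-linear function max_k(a_k + b_k x).
a = np.array([0.0, -1.0, -3.0])
b = np.array([-1.0, 0.5, 2.0])

def max_affine(x):
    return np.max(a + b * x[:, None], axis=1)

def softmax_affine(x, beta=10.0):
    """Smooth surrogate: (1/beta) * logsumexp(beta * (a_k + b_k x))."""
    return logsumexp(beta * (a + b * x[:, None]), axis=1) / beta

x = np.linspace(-3, 3, 7)
for beta in (1.0, 10.0, 100.0):
    err = np.max(np.abs(softmax_affine(x, beta) - max_affine(x)))
    print(f"beta = {beta:6.1f}  max |softmax - max| = {err:.4f}")
# The gap is bounded by log(K)/beta, so the surrogate converges to the max as beta grows
# while remaining smooth enough for gradient-based samplers such as HMC.
```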
Advisors/Committee Members: Bhattacharya, Anirban (advisor), Gaynanova, Irina (committee member), Mallick, Bani (committee member), Qian, Xiaoning (committee member).
Subjects/Keywords: Approximate; Markov chain Monte Carlo
APA (6th Edition):
Larsen, A. E. (2020). Approximation Schemes to Simplify Posterior Computation. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/192454
Chicago Manual of Style (16th Edition):
Larsen, Allyson Elaine. “Approximation Schemes to Simplify Posterior Computation.” 2020. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/192454.
MLA Handbook (7th Edition):
Larsen, Allyson Elaine. “Approximation Schemes to Simplify Posterior Computation.” 2020. Web. 07 Mar 2021.
Vancouver:
Larsen AE. Approximation Schemes to Simplify Posterior Computation. [Internet] [Doctoral dissertation]. Texas A&M University; 2020. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/192454.
Council of Science Editors:
Larsen AE. Approximation Schemes to Simplify Posterior Computation. [Doctoral Dissertation]. Texas A&M University; 2020. Available from: http://hdl.handle.net/1969.1/192454

Texas A&M University
20.
Stripling, Hayes Franklin.
Adjoint-Based Uncertainty Quantification and Sensitivity Analysis for Reactor Depletion Calculations.
Degree: PhD, Nuclear Engineering, 2013, Texas A&M University
URL: http://hdl.handle.net/1969.1/151312
▼ Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and the neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of the errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and the myriad sources of uncertainty and error.
We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for the solution of two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable but are also the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
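Not the PDT implementation, but a tiny sketch of why the adjoint approach is attractive: for a linear "depletion-like" system dN/dt = A N advanced with forward Euler and a scalar QOI c^T N(T), a single backward (adjoint) sweep yields the sensitivity of the QOI to every initial density at once, matching brute-force perturbations. The two-nuclide matrix and its rates are invented for illustration.

```python
import numpy as np

# Toy 2-"nuclide" depletion-like system dN/dt = A N, QOI = c^T N(T).
A = np.array([[-0.30, 0.00],
              [ 0.30, -0.05]])        # parent decays into daughter (illustrative rates)
c = np.array([0.0, 1.0])              # QOI: daughter density at final time
N0 = np.array([1.0, 0.0])
dt, n_steps = 0.01, 500

# Forward sweep: N_{k+1} = (I + dt*A) N_k.
M = np.eye(2) + dt * A
N = N0.copy()
for _ in range(n_steps):
    N = M @ N
qoi = c @ N

# Adjoint (backward) sweep: lambda_k = M^T lambda_{k+1}, with lambda_final = c.
# Then dQOI/dN0 = lambda_0, i.e. sensitivities to *all* initial densities in one pass.
lam = c.copy()
for _ in range(n_steps):
    lam = M.T @ lam
adjoint_sens = lam

# Check against brute-force perturbation of each initial density.
fd_sens = np.zeros(2)
for i in range(2):
    Np = N0.copy()
    Np[i] += 1e-6
    Nk = Np.copy()
    for _ in range(n_steps):
        Nk = M @ Nk
    fd_sens[i] = (c @ Nk - qoi) / 1e-6

print("adjoint sensitivities      :", adjoint_sens)
print("finite-difference estimates:", fd_sens)
```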
Advisors/Committee Members: Adams, Marvin L. (advisor), Mallick, Bani K. (committee member), McClarren, Ryan G. (committee member), Morel, Jim E. (committee member), Anitescu, Mihai (committee member).
Subjects/Keywords: Adjoint; Sensitivity Analysis; Uncertainty Quantification; Depletion Calculations
APA (6th Edition):
Stripling, H. F. (2013). Adjoint-Based Uncertainty Quantification and Sensitivity Analysis for Reactor Depletion Calculations. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/151312
Chicago Manual of Style (16th Edition):
Stripling, Hayes Franklin. “Adjoint-Based Uncertainty Quantification and Sensitivity Analysis for Reactor Depletion Calculations.” 2013. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/151312.
MLA Handbook (7th Edition):
Stripling, Hayes Franklin. “Adjoint-Based Uncertainty Quantification and Sensitivity Analysis for Reactor Depletion Calculations.” 2013. Web. 07 Mar 2021.
Vancouver:
Stripling HF. Adjoint-Based Uncertainty Quantification and Sensitivity Analysis for Reactor Depletion Calculations. [Internet] [Doctoral dissertation]. Texas A&M University; 2013. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/151312.
Council of Science Editors:
Stripling HF. Adjoint-Based Uncertainty Quantification and Sensitivity Analysis for Reactor Depletion Calculations. [Doctoral Dissertation]. Texas A&M University; 2013. Available from: http://hdl.handle.net/1969.1/151312

Texas A&M University
21.
Wei, Rubin.
Highly Nonlinear Measurement Error Models in Nutritional Epidemiology.
Degree: PhD, Statistics, 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/161236
▼ This dissertation consists of two main projects in the area of measurement error models with application in nutritional epidemiology.
The first project studies the application of moment reconstruction and moment-adjusted imputation in the context of nonlinear Berkson-type measurement error. The idea of moment reconstruction and moment-adjusted imputation, like regression calibration, is to replace the unobserved variable of interest, which is subject to measurement error, with a proxy that can be used in a variety of subsequent analyses without redoing the measurement error model each time a different downstream analysis is performed. However, both methods essentially require the homoscedastic classical measurement error model or a non-classical model that can easily be reduced to a classical one. In the first project, we deal with a case where the measurement error structure is of nonlinear Berkson type, and we develop analogues of moment reconstruction and moment-adjusted imputation for this case. We use the National Institutes of Health-AARP Diet and Health Study, where the latent variable is a dietary pattern score called the Healthy Eating Index-2005, together with simulations to illustrate the methods. The numerical results show the promise of these methods in the nonlinear Berkson-type measurement error context.
In the second project, we consider measurement error models for two variables observed repeatedly and subject to measurement error. One variable is continuous but positive, while the other is a mixture of continuous and zero measurements. This second variable has two sources of zeros. The first source is episodic zeros, wherein some of the measurements for an individual may be zero and others positive. The second source is hard zeros, i.e., some individuals will always report zero. An example is the consumption of alcohol from alcoholic beverages: some individuals consume alcoholic beverages episodically, while others never consume them. However, with a small number of repeated measurements per individual, it is not possible to determine who is an episodic zero and who is a hard zero. We develop a new measurement error model for this problem and use Bayesian methods to fit it. We also contrast our approach, applied to a single variable subject to excess zeros, with existing methods developed for a single variable, which have proven to be somewhat numerically unstable. Simulations and data analyses from two studies are used to show that the new method gives more realistic and numerically stable results than the maximum likelihood approach.
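Not from the dissertation (which treats the harder nonlinear Berkson-type and excess-zeros settings), but a minimal sketch of the classical-error baseline that moment reconstruction and regression calibration generalize: the proxy W = X + U is replaced by an estimate of E[X | W] before the downstream regression, which removes the attenuation in the slope. All quantities below are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# True exposure X, classical measurement error U, proxy W = X + U, outcome Y.
sigma_x, sigma_u = 1.0, 0.8
X = rng.normal(0.0, sigma_x, n)
W = X + rng.normal(0.0, sigma_u, n)
Y = 1.0 + 2.0 * X + rng.normal(0.0, 0.5, n)        # true slope = 2

def ols_slope(x, y):
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / (x @ x))

# Naive regression of Y on W is attenuated by sigma_x^2 / (sigma_x^2 + sigma_u^2).
naive = ols_slope(W, Y)

# Regression calibration: replace W by E[X | W] = mu_w + lambda * (W - mu_w),
# with lambda estimated from the (known or estimated) error variance sigma_u^2.
lam = (np.var(W) - sigma_u ** 2) / np.var(W)
X_rc = W.mean() + lam * (W - W.mean())
calibrated = ols_slope(X_rc, Y)

print(f"naive slope      = {naive:.3f}   (attenuated)")
print(f"calibrated slope = {calibrated:.3f}   (close to the true value 2.0)")
```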
Advisors/Committee Members: Carroll, Raymond J. (advisor), Longnecker, Michael T. (committee member), Turner, Nancy D. (committee member), Mallick, Bani K. (committee member).
Subjects/Keywords: Measurement error; Berkson-type error; Latent variable models; Moment reconstruction; Bayesian methods; Hard zeroes; Zero-inflation; Mixed models; Nutritional epidemiology; Usual intake; Never-consumers
APA (6th Edition):
Wei, R. (2014). Highly Nonlinear Measurement Error Models in Nutritional Epidemiology. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/161236
Chicago Manual of Style (16th Edition):
Wei, Rubin. “Highly Nonlinear Measurement Error Models in Nutritional Epidemiology.” 2014. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/161236.
MLA Handbook (7th Edition):
Wei, Rubin. “Highly Nonlinear Measurement Error Models in Nutritional Epidemiology.” 2014. Web. 07 Mar 2021.
Vancouver:
Wei R. Highly Nonlinear Measurement Error Models in Nutritional Epidemiology. [Internet] [Doctoral dissertation]. Texas A&M University; 2014. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/161236.
Council of Science Editors:
Wei R. Highly Nonlinear Measurement Error Models in Nutritional Epidemiology. [Doctoral Dissertation]. Texas A&M University; 2014. Available from: http://hdl.handle.net/1969.1/161236

Texas A&M University
22.
Olalotiti-Lawal, Feyisayo Omoniyi.
Effective Reservoir Management for Carbon Utilization and Storage Applications.
Degree: PhD, Petroleum Engineering, 2018, Texas A&M University
URL: http://hdl.handle.net/1969.1/173427
▼ It is believed that the observed rapid rise in global temperatures is caused by the high atmospheric concentration of CO2 due to emissions from fossil fuel combustion. While global mitigation efforts are currently in place, hydrocarbons are expected to remain the planet's main source of energy supply for the foreseeable future. Harmonizing these seemingly conflicting objectives has given rise to the concept of Carbon Capture, Utilization and Storage (CCUS).
A prominent form of CCUS involves the capture and injection of anthropogenic CO2 for Enhanced Oil Recovery (EOR). During CO2 EOR, a substantial amount of the injected CO2 is retained and permanently stored in the subsurface. However, due to inherent geological and thermodynamic complexities in subsurface environments, most CCUS projects are plagued by poor sweep efficiencies. Successful CCUS implementation therefore requires advanced reservoir management strategies that appropriately capture the relevant physics. In this regard, effective techniques in three fundamental areas of reservoir management, namely forward modeling, inverse modeling and field development optimization, are presented herein. In each area, we demonstrate the validity and utility of our methodologies for CCUS applications with field examples.
First, a comprehensive streamline-based simulation of CO2 injection in saline aquifers is proposed. Here, the unique strength of streamlines at resolving sub-grid detail, which enables a high-resolution representation of CO2 transport during injection, is exploited. Relevant physics such as compressibility and formation dry-out effects, which were ignored in previously proposed streamline models, are accounted for. The methodology is illustrated with a series of synthetic models and applied to the Johansen field in the North Sea. All streamline-based models are benchmarked against a commercial compositional simulator, with good agreement.
Second, a Multiresolution Grid Connectivity-based Transform (M-GCT) for effective subsurface model calibration is proposed. M-GCT allows the representation and update of grid property fields with improved spatial resolution. This enables improved characterization of the subsurface, especially for CCUS systems in which CO2 transport is highly sensitive to contrasts in hydraulic conductivity. The approach is illustrated with a synthetic and a field-scale problem. To demonstrate its utility, the proposed method is applied to a field actively supporting a post-combustion CCUS project.
Finally, a streamline-based rate optimization of intelligent wells used in CCUS projects is proposed. Building on a previously developed method, a combination of the incremental oil recovery, the CO2 storage efficiency and the CO2 utilization factor is optimized through optimal rate schedules of the installed ICVs. The approach is particularly efficient since the required objective function gradients and Hessians are computed analytically from streamline-derived sensitivities obtained from a single simulation run. This…
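A simplified, hypothetical sketch of the grid-connectivity-transform idea referenced above (not the M-GCT code): build the graph Laplacian of a small 2D grid's connectivity and use its leading (low-frequency) eigenvectors as a compact basis in which a smooth property field, such as log-permeability, can be reparameterized and updated during calibration. Grid size, field and truncation level are invented.

```python
import numpy as np

# Graph Laplacian of an nx-by-ny grid with 4-point (face-neighbor) connectivity.
nx, ny = 10, 10
n = nx * ny
idx = lambda i, j: i * ny + j
L = np.zeros((n, n))
for i in range(nx):
    for j in range(ny):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < nx and 0 <= jj < ny:
                L[idx(i, j), idx(i, j)] += 1.0     # degree term
                L[idx(i, j), idx(ii, jj)] -= 1.0   # adjacency term

# Eigenvectors of L ordered by eigenvalue: smooth, large-scale modes come first.
eigvals, eigvecs = np.linalg.eigh(L)
k = 20                                  # retain a handful of low-frequency basis vectors
Phi = eigvecs[:, :k]                    # n x k orthonormal basis

# Reparameterize a synthetic smooth field: m ≈ Phi v, with v = Phi^T m.
xg, yg = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
m_true = np.sin(xg / 3.0) + 0.5 * np.cos(yg / 2.0)
v = Phi.T @ m_true.ravel()              # k coefficients instead of n cell values
m_approx = Phi @ v
rel_err = np.linalg.norm(m_approx - m_true.ravel()) / np.linalg.norm(m_true.ravel())
print(f"{k} of {n} basis coefficients reproduce the field with relative error {rel_err:.3f}")
```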
Advisors/Committee Members: Datta-Gupta, Akhil (advisor), King, Michael J (committee member), Gildin, Eduardo (committee member), Mallick, Bani (committee member).
Subjects/Keywords: CCUS; CO2 EOR; CO2 Storage; Subsurface Model Reparameterization; Production Optimization
APA (6th Edition):
Olalotiti-Lawal, F. O. (2018). Effective Reservoir Management for Carbon Utilization and Storage Applications. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/173427
Chicago Manual of Style (16th Edition):
Olalotiti-Lawal, Feyisayo Omoniyi. “Effective Reservoir Management for Carbon Utilization and Storage Applications.” 2018. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/173427.
MLA Handbook (7th Edition):
Olalotiti-Lawal, Feyisayo Omoniyi. “Effective Reservoir Management for Carbon Utilization and Storage Applications.” 2018. Web. 07 Mar 2021.
Vancouver:
Olalotiti-Lawal FO. Effective Reservoir Management for Carbon Utilization and Storage Applications. [Internet] [Doctoral dissertation]. Texas A&M University; 2018. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/173427.
Council of Science Editors:
Olalotiti-Lawal FO. Effective Reservoir Management for Carbon Utilization and Storage Applications. [Doctoral Dissertation]. Texas A&M University; 2018. Available from: http://hdl.handle.net/1969.1/173427

Texas A&M University
23.
Sarkar, Abhra.
Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors.
Degree: PhD, Statistics, 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/153327
▼ Although the literature on measurement error problems is quite extensive, solutions to even the most fundamental problems, like density deconvolution and regression with errors in covariates, are available only under numerous simplifying and unrealistic assumptions. This dissertation demonstrates that Bayesian methods, by accommodating measurement errors through natural hierarchies, can provide a very powerful framework for solving these important problems under more realistic scenarios. However, the very presence of measurement errors often renders techniques that are successful in measurement-error-free scenarios inefficient, numerically unstable, computationally challenging or intractable. Additionally, measurement error problems often have unique features that compound the modeling and computational challenges.
In this dissertation, we develop novel Bayesian semiparametric approaches that cater to these unique challenges and allow us to break free from many restrictive parametric assumptions of previously existing approaches. We first consider the problem of univariate density deconvolution when replicated proxies are available for each unknown value of the variable of interest. Existing deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. We relax these assumptions and develop robust and efficient deconvolution approaches based on Dirichlet process mixture models and mixtures of B-splines in the presence of conditionally heteroscedastic measurement errors. We then extend the methodology to nonlinear univariate regression with errors-in-covariates problems in which the densities of the covariate, the regression errors and the measurement errors are all unknown, and the regression and measurement errors are conditionally heteroscedastic.
The final section of this dissertation is devoted to the development of flexible multivariate density deconvolution approaches. The methods available in the existing sparse literature all assume the measurement error density to be fully specified. In contrast, we develop multivariate deconvolution approaches for scenarios in which the measurement error density is unknown but replicated proxies are available for each subject. We consider scenarios in which the measurement errors are distributed independently of the vector-valued variable of interest as well as scenarios in which they are conditionally heteroscedastic. To meet the significantly harder modeling and computational challenges of the multivariate problem, we exploit properties of finite mixture models, multivariate normal kernels, latent factor models and exchangeable priors in many novel ways. We provide theoretical results showing the flexibility of the proposed models. In simulation experiments, the proposed semiparametric methods vastly outperform previously…
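To make the data structure concrete, a small illustrative simulation (not the dissertation's deconvolution machinery): each subject's unobserved X_i is observed only through replicated proxies W_ij = X_i + s(X_i) * e_ij, with an error scale that depends on X_i (conditional heteroscedasticity) and a non-normal error distribution. All settings are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 300, 3                              # subjects and replicates per subject

# Unobserved variable of interest: a bimodal mixture, as is typical in deconvolution problems.
comp = rng.integers(0, 2, n)
X = np.where(comp == 0, rng.normal(-1.5, 0.6, n), rng.normal(1.5, 0.6, n))

# Conditionally heteroscedastic, non-normal measurement errors:
# W_ij = X_i + s(X_i) * e_ij with s(x) = 0.3 + 0.2*|x| and Laplace-distributed e_ij.
s = 0.3 + 0.2 * np.abs(X)
e = rng.laplace(0.0, 1.0 / np.sqrt(2.0), (n, m))   # unit-variance Laplace errors
W = X[:, None] + s[:, None] * e

# With replicates, subject-level means and spreads crudely separate signal from noise scale.
print("corr(X, subject mean of W) =", round(np.corrcoef(X, W.mean(axis=1))[0, 1], 3))
print("corr(|X|, subject SD of W) =", round(np.corrcoef(np.abs(X), W.std(axis=1, ddof=1))[0, 1], 3))
```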
Advisors/Committee Members: Mallick, Bani K. (advisor), Carroll, Raymond J. (advisor), Bhattacharya, Anirban (committee member), Yoon, Byung-Jun (committee member).
Subjects/Keywords: B-splines; Conditional heteroscedasticity; Density deconvolution; Dirichlet process; Latent factor analyzers; Measurement errors; Mixture models; Nutritional epidemiology; Regression with errors in covariates; Sparsity inducing priors
APA (6th Edition):
Sarkar, A. (2014). Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/153327
Chicago Manual of Style (16th Edition):
Sarkar, Abhra. “Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors.” 2014. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/153327.
MLA Handbook (7th Edition):
Sarkar, Abhra. “Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors.” 2014. Web. 07 Mar 2021.
Vancouver:
Sarkar A. Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors. [Internet] [Doctoral dissertation]. Texas A&M University; 2014. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/153327.
Council of Science Editors:
Sarkar A. Bayesian Semiparametric Density Deconvolution and Regression in the Presence of Measurement Errors. [Doctoral Dissertation]. Texas A&M University; 2014. Available from: http://hdl.handle.net/1969.1/153327

Texas A&M University
24.
Goddard, Scott D.
Restricted Most Powerful Bayesian Tests.
Degree: PhD, Statistics, 2015, Texas A&M University
URL: http://hdl.handle.net/1969.1/155108
▼ Uniformly most powerful Bayesian tests (UMPBTs) are defined to be Bayesian tests that maximize the probability that the Bayes factor against a fixed null hypothesis exceeds a specified evidence threshold. Unfortunately, UMPBTs exist only in a relatively limited number of testing scenarios, and in particular they cannot be defined for most tests involving linear models. In this dissertation, I generalize the notion of UMPBTs by restricting the class of alternative hypotheses that are considered in the test of a given null hypothesis. I call the resulting class of Bayesian hypothesis tests restricted most powerful Bayesian tests (RMPBTs). I then derive RMPBTs for linear models by restricting the class of possible alternative hypotheses to g-priors.
An important feature of the resulting class of tests is that their rejection regions coincide with the rejection regions of the usual frequentist F-tests, provided that the evidence thresholds for the Bayesian tests are appropriately matched to the size of the classical tests. This correspondence leads to the definition of default Bayes factors for many common tests of linear hypotheses. I illustrate the use of RMPBTs in the special cases of ANOVA and one- and two-sample t-tests. I then use RMPBTs to develop a novel Bayesian variable selection method and compare its performance to other Bayesian tests based on g-priors in a sequence of numerical examples. Finally, an R software package is described that implements the RMPBTs developed herein as well as many previously developed UMPBTs.
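As a concrete, non-authoritative illustration of the Bayes-factor/F-test correspondence discussed above: under a Zellner g-prior, one standard Bayes factor for a linear model against the intercept-only null can be written in terms of R^2, and for fixed n, p and g it is a monotone function of the classical F statistic, so thresholding one is equivalent to thresholding the other. The choice g = n and the simulated data below are arbitrary, and this is not the RMPBT construction itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, p = 60, 3
X = rng.standard_normal((n, p))
y = 1.0 + X @ np.array([0.5, 0.0, -0.4]) + rng.standard_normal(n)

# R^2 of the full model (with intercept) versus the intercept-only null.
Xd = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
rss = np.sum((y - Xd @ beta_hat) ** 2)
tss = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - rss / tss

# Classical F statistic and p-value for H0: all slopes are zero.
F = (r2 / p) / ((1.0 - r2) / (n - p - 1))
p_value = stats.f.sf(F, p, n - p - 1)

# A standard g-prior Bayes factor of the full model against the null, written via R^2.
g = n
log_bf10 = 0.5 * (n - p - 1) * np.log(1.0 + g) - 0.5 * (n - 1) * np.log(1.0 + g * (1.0 - r2))

print(f"R^2 = {r2:.3f},  F = {F:.2f} (p = {p_value:.4f}),  log BF10 = {log_bf10:.2f}")
# Both F and BF10 increase with R^2 for fixed n, p and g, so rejecting when BF10 exceeds
# a threshold is equivalent to rejecting when F exceeds a matched critical value.
```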
Advisors/Committee Members: Johnson, Valen E (advisor), Mallick, Bani K (committee member), Goldsmith, Pat R (committee member), Carroll, Raymond J (committee member).
Subjects/Keywords: Hypothesis tests; g prior; UMPBT; Bayesian variable selection
APA (6th Edition):
Goddard, S. D. (2015). Restricted Most Powerful Bayesian Tests. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/155108
Chicago Manual of Style (16th Edition):
Goddard, Scott D. “Restricted Most Powerful Bayesian Tests.” 2015. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/155108.
MLA Handbook (7th Edition):
Goddard, Scott D. “Restricted Most Powerful Bayesian Tests.” 2015. Web. 07 Mar 2021.
Vancouver:
Goddard SD. Restricted Most Powerful Bayesian Tests. [Internet] [Doctoral dissertation]. Texas A&M University; 2015. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/155108.
Council of Science Editors:
Goddard SD. Restricted Most Powerful Bayesian Tests. [Doctoral Dissertation]. Texas A&M University; 2015. Available from: http://hdl.handle.net/1969.1/155108

Texas A&M University
25.
Hetzler, Adam C.
Quantification of Uncertainties Due to Opacities in a Laser-Driven Radiative-Shock Problem.
Degree: PhD, Nuclear Engineering, 2013, Texas A&M University
URL: http://hdl.handle.net/1969.1/149343
▼ This research presents new physics-based methods to estimate predictive uncertainty stemming from uncertainty in the material opacities in radiative transfer computations of key quantities of interest (QOIs). New methods are needed because it is infeasible to apply standard uncertainty-propagation techniques to the O(10^5) uncertain opacities in a realistic simulation. The new approach to uncertainty quantification applies the uncertainty analysis to the physical parameters in the underlying model used to calculate the opacities. This set of uncertain parameters is much smaller (O(10^2)) than the number of opacities. To further reduce the dimension of the set of parameters to be rigorously explored, we use additional screening applied at two different levels of the calculational hierarchy: first, physics-based screening eliminates a priori the physical parameters that the underlying physics models indicate are unimportant; then, sensitivity analysis in simplified versions of the complex problem of interest screens out parameters that are not important to the QOIs. We employ a Bayesian Multivariate Adaptive Regression Spline (BMARS) emulator for this sensitivity analysis. The high dimension of the input space and the large number of samples test the efficacy of these methods on larger problems. Ultimately, we want to perform uncertainty quantification on the large, complex problem with the reduced set of parameters. Results of this research demonstrate that the QOIs for the target problems agree across the different parameter screening criteria and varying sample sizes. Since the QOIs agree, we have gained confidence in our results obtained with the multiple screening criteria and sample sizes.
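Not the BMARS workflow itself, but a minimal sketch of the screening step it describes: sample the reduced physical-parameter space, fit a cheap surrogate to a toy QOI, and rank parameters with a simple standardized-coefficient sensitivity measure before committing to the expensive study. The toy QOI, parameter count and screening threshold are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n_samples, n_params = 400, 8

# Space-filling design: uniform samples over [0, 1]^d (a Latin hypercube would be typical).
u = rng.random((n_samples, n_params))

# Toy quantity of interest: only parameters 0, 2 and 5 matter (plus a little noise).
qoi = 3.0 * u[:, 0] + 1.5 * u[:, 2] ** 2 - 2.0 * u[:, 5] + 0.05 * rng.standard_normal(n_samples)

# Linear surrogate fit; standardized regression coefficients act as a crude sensitivity index.
Z = (u - u.mean(axis=0)) / u.std(axis=0)
y = (qoi - qoi.mean()) / qoi.std()
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n_samples), Z]), y, rcond=None)
sensitivity = np.abs(coef[1:])

ranking = np.argsort(sensitivity)[::-1]
keep = ranking[sensitivity[ranking] > 0.1]          # screening threshold (arbitrary)
print("sensitivity indices:", np.round(sensitivity, 2))
print("parameters retained for the full UQ study:", sorted(keep.tolist()))
```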
Advisors/Committee Members: Adams, Marvin L (advisor), Mallick, Bani K (committee member), McClarren, Ryan G (committee member), Morel, Jim E (committee member).
Subjects/Keywords: Uncertainty Quantification; Sensitivity Analysis
APA (6th Edition):
Hetzler, A. C. (2013). Quantification of Uncertainties Due to Opacities in a Laser-Driven Radiative-Shock Problem. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/149343
Chicago Manual of Style (16th Edition):
Hetzler, Adam C. “Quantification of Uncertainties Due to Opacities in a Laser-Driven Radiative-Shock Problem.” 2013. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/149343.
MLA Handbook (7th Edition):
Hetzler, Adam C. “Quantification of Uncertainties Due to Opacities in a Laser-Driven Radiative-Shock Problem.” 2013. Web. 07 Mar 2021.
Vancouver:
Hetzler AC. Quantification of Uncertainties Due to Opacities in a Laser-Driven Radiative-Shock Problem. [Internet] [Doctoral dissertation]. Texas A&M University; 2013. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/149343.
Council of Science Editors:
Hetzler AC. Quantification of Uncertainties Due to Opacities in a Laser-Driven Radiative-Shock Problem. [Doctoral Dissertation]. Texas A&M University; 2013. Available from: http://hdl.handle.net/1969.1/149343

Texas A&M University
26.
Alahmadi, Hasan Ali H.
A Model for Optimizing Energy Investments and Policy Under Uncertainty with Application to Saudi Arabia.
Degree: PhD, Petroleum Engineering, 2016, Texas A&M University
URL: http://hdl.handle.net/1969.1/157993
▼ An energy producer must determine optimal energy investment strategies in order to maximize the value of its energy portfolio. Determining optimal investment strategies is challenging. One of the main challenges is the large uncertainty in many of the parameters involved in the optimization process. Existing large-scale energy models are mostly deterministic and thus have limited capability for assessing uncertainty. Modelers usually use scenario analysis to address model input uncertainty.
In this research, I developed a probabilistic model for optimizing energy investments and policies from an energy producer's perspective. The model uses a top-down approach to probabilistically forecast primary energy demand. Distributions rather than static values are used to model uncertainty in the input variables. The model can be applied to a country-level energy system. It maximizes the portfolio expected net present value (ENPV) while ensuring energy sustainability. The model was built in MS Excel® using Palisade's @RISK add-in, which is capable of modeling uncertain parameters and performing stochastic simulation optimization.
The model was applied to Saudi Arabia to determine its optimum energy investment strategy, determine the value of investing in alternative energy sources, and compare deterministic and probabilistic modeling approaches. The model, given its assumptions and limitations, suggests that Saudi Arabia should keep its oil production capacity at 12.5 million barrels per day, especially in the short term. It also suggests that most of the future power-generation (electricity) demand in Saudi Arabia should be met using alternative-energy sources (nuclear, solar, and wind). Otherwise, large gas production is required to meet such demand. In addition, comparing probabilistic to deterministic model results shows that deterministic models may overestimate total portfolio ENPV and underestimate future investments needed to meet projected power demand.
A primary contribution of this work is rigorously addressing uncertainty quantification in energy modeling. Building probabilistic energy models is one of the challenges facing the industry today. The model is also the first, to the best of my knowledge, that attempts to optimize Saudi Arabia’s energy portfolio using a probabilistic approach and addressing the value of investing in alternative energy sources.
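A tiny, hypothetical sketch of the deterministic-versus-probabilistic contrast described above, unrelated to the actual model or its data: the expected NPV of a toy two-asset portfolio is evaluated by Monte Carlo over an uncertain price and compared with the NPV computed at the mean price; with a payoff that is concave in price, the deterministic shortcut overstates value, which is the kind of gap a probabilistic ENPV model exposes.

```python
import numpy as np

rng = np.random.default_rng(7)

years = np.arange(1, 11)
discount = 1.0 / (1.08 ** years)                 # 8% discount rate (illustrative)

def portfolio_cash_flow(price):
    """Annual cash flow (arbitrary units) for a toy two-asset portfolio; concave in price."""
    oil = np.minimum(price, 80.0) * 0.05 - 1.0   # oil asset with capped realized revenue
    alternative = 0.8                            # price-insensitive alternative-energy asset
    return oil + alternative

# Uncertain long-run oil price (lognormal), versus its deterministic mean.
prices = rng.lognormal(mean=np.log(70.0), sigma=0.35, size=100_000)

npv_deterministic = np.sum(discount) * portfolio_cash_flow(prices.mean())
enpv_probabilistic = np.sum(discount) * portfolio_cash_flow(prices).mean()

print(f"NPV at the mean price (deterministic): {npv_deterministic:8.2f}")
print(f"expected NPV over price draws        : {enpv_probabilistic:8.2f}")
# The deterministic shortcut overstates value whenever the payoff is concave in the
# uncertain inputs, one reason fully probabilistic ENPV optimization can change decisions.
```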
Advisors/Committee Members: McVay, Duane A (advisor), Gildin, Eduardo (committee member), Voneiff, George W (committee member), Mallick, Bani K (committee member).
Subjects/Keywords: Energy Optimization; Uncertainty Quantification; Probabilistic Energy Modeling; Energy Economics
APA (6th Edition):
Alahmadi, H. A. H. (2016). A Model for Optimizing Energy Investments and Policy Under Uncertainty with Application to Saudi Arabia. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/157993
Chicago Manual of Style (16th Edition):
Alahmadi, Hasan Ali H. “A Model for Optimizing Energy Investments and Policy Under Uncertainty with Application to Saudi Arabia.” 2016. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/157993.
MLA Handbook (7th Edition):
Alahmadi, Hasan Ali H. “A Model for Optimizing Energy Investments and Policy Under Uncertainty with Application to Saudi Arabia.” 2016. Web. 07 Mar 2021.
Vancouver:
Alahmadi HAH. A Model for Optimizing Energy Investments and Policy Under Uncertainty with Application to Saudi Arabia. [Internet] [Doctoral dissertation]. Texas A&M University; 2016. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/157993.
Council of Science Editors:
Alahmadi HAH. A Model for Optimizing Energy Investments and Policy Under Uncertainty with Application to Saudi Arabia. [Doctoral Dissertation]. Texas A&M University; 2016. Available from: http://hdl.handle.net/1969.1/157993

Texas A&M University
27.
Talluri, Rajesh.
Bayesian Gaussian Graphical models using sparse selection priors and their mixtures.
Degree: PhD, Statistics, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9828
▼ We propose Bayesian methods for estimating the precision matrix in Gaussian graphical models. The methods lead to sparse and adaptively shrunk estimators of the precision matrix, and thus conduct model selection and estimation simultaneously. Our methods are based on selection and shrinkage priors leading to parsimonious parameterizations of the precision (inverse covariance) matrix, which is essential in several applications involving learning relationships among the variables. In Chapter I, we employ the Laplace prior on the off-diagonal elements of the precision matrix, which is similar to the lasso model in a regression context. This type of prior encourages sparsity while providing shrinkage estimates. Second, we introduce a novel type of selection prior that develops a sparse structure for the precision matrix by setting most of the elements exactly to zero while ensuring positive definiteness.
In Chapter II, we extend the above methods to perform classification. Reverse-phase protein array (RPPA) analysis is a powerful, relatively new platform that allows for high-throughput, quantitative analysis of protein networks. One of the challenges that currently limits the potential of this technology is the lack of methods that allow for accurate data modeling and identification of related networks and samples. Such models may improve the accuracy of biological sample classification based on patterns of protein network activation and provide insight into the distinct biological relationships underlying different cancers. We propose a Bayesian sparse graphical modeling approach motivated by RPPA data, using selection priors on the conditional relationships in the presence of class information. We apply our methodology to an RPPA dataset generated from panels of human breast cancer and ovarian cancer cell lines. We demonstrate that the model is able to distinguish the different cancer cell types more accurately than several existing models and to identify differential regulation of components of a critical signaling network (the PI3K-AKT pathway) between these cancers. This approach represents a powerful new tool that can be used to improve our understanding of protein networks in cancer.
In Chapter III, we extend these methods to mixtures of Gaussian graphical models for clustered data, with each mixture component assumed Gaussian with an adaptive covariance structure. We model the data using Dirichlet processes and finite mixture models, and we discuss appropriate posterior simulation schemes to implement posterior inference in the proposed models, including the evaluation of normalizing constants that are functions of parameters of interest and arise from the restrictions on the correlation matrix. We evaluate the operating characteristics of our method via simulations and discuss examples based on several real datasets.
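A brief, non-Bayesian sketch of the kind of sparse precision-matrix estimate the Laplace (lasso-type) prior targets: scikit-learn's graphical lasso penalizes the off-diagonal entries of the inverse covariance, the penalized-likelihood counterpart of a Laplace-prior MAP estimate. The simulated chain graph and penalty value are illustrative, and the dissertation's selection priors and mixtures are not reproduced here.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(8)

# Build a sparse "true" precision matrix: a chain graph over 6 variables.
p = 6
Omega = np.eye(p)
for i in range(p - 1):
    Omega[i, i + 1] = Omega[i + 1, i] = 0.4
Sigma = np.linalg.inv(Omega)

# Sample data and fit the graphical lasso (L1 penalty on off-diagonal precision entries).
X = rng.multivariate_normal(np.zeros(p), Sigma, size=500)
model = GraphicalLasso(alpha=0.1).fit(X)

est = model.precision_
support = (np.abs(est) > 1e-3) & ~np.eye(p, dtype=bool)
edges = sorted({(int(i), int(j)) for i, j in zip(*np.nonzero(support)) if i < j})
print("estimated edges:", edges)     # ideally the chain (0,1), (1,2), ..., (4,5)
```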
Advisors/Committee Members: Mallick, Bani K. (advisor), Baladandayuthapani, Veerabhadran (committee member), Hart, Jeffrey D. (committee member), Datta, Aniruddha (committee member).
Subjects/Keywords: Bayesian; Gaussian Graphical Models; Covariance Selection; Mixture Models
APA (6th Edition):
Talluri, R. (2012). Bayesian Gaussian Graphical models using sparse selection priors and their mixtures. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9828
Chicago Manual of Style (16th Edition):
Talluri, Rajesh. “Bayesian Gaussian Graphical models using sparse selection priors and their mixtures.” 2012. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9828.
MLA Handbook (7th Edition):
Talluri, Rajesh. “Bayesian Gaussian Graphical models using sparse selection priors and their mixtures.” 2012. Web. 07 Mar 2021.
Vancouver:
Talluri R. Bayesian Gaussian Graphical models using sparse selection priors and their mixtures. [Internet] [Doctoral dissertation]. Texas A&M University; 2012. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9828.
Council of Science Editors:
Talluri R. Bayesian Gaussian Graphical models using sparse selection priors and their mixtures. [Doctoral Dissertation]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9828

Texas A&M University
28.
Rahman, Shahina.
Efficient Nonparametric and Semiparametric Regression Methods with application in Case-Control Studies.
Degree: PhD, Statistics, 2015, Texas A&M University
URL: http://hdl.handle.net/1969.1/155719
▼ Regression analysis is one of the most important tools of statistics and is widely used in other scientific fields for prediction and for modeling the association between variables. With modern computing techniques and high-performance devices, regression analysis in multiple dimensions has become an important issue. Our task is to address the issue of modeling with no assumptions on the mean and variance structure and, further, with no assumption on the error distribution; in other words, we focus on developing robust semiparametric and nonparametric regression methods. In modern genetic epidemiological association studies, it is often important to investigate the relationships among the potential covariates related to disease in case-control data, a study known as "secondary analysis." We first model the association between the potential covariates nonparametrically in the univariate setting, and then in the multivariate setting by assuming a convenient and popular multivariate semiparametric model known as the single-index model. The secondary analysis of case-control studies is particularly challenging for multiple reasons: (a) the case-control sample is not a random sample, (b) the logistic intercept is practically not identifiable, and (c) misspecification of the error distribution leads to inconsistent results. For rare diseases, controls (individuals free of the disease) are typically used for valid estimation. However, numerous publications utilize the entire case-control sample (including the diseased individuals) to increase efficiency. Previous work in this context has either specified a fully parametric distribution for the regression errors, specified a homoscedastic distribution for the regression errors, or assumed parametric forms for the regression mean.
In the first chapter, we focus on predicting a univariate covariate Y from another univariate covariate X with neither a parametric form for the mean function nor a distributional assumption on the error, hence addressing potential heteroscedasticity, a problem that has not been studied before. We develop a tilted kernel-based estimator, which is a first attempt to model the mean function nonparametrically in secondary analysis. In the following chapters, we focus on i.i.d. samples to model both the mean and the variance function for predicting Y from multiple covariates X without assuming any form for the regression mean. In particular, we model Y by a single-index model m(X^T ϴ), where ϴ is a single-index vector and m is unspecified. We also model the variance function by another flexible single-index model. We develop a practical and readily applicable Bayesian methodology based on penalized splines and Markov chain Monte Carlo (MCMC), both in the i.i.d. setup and in the case-control setup. For efficient estimation, we model the error distribution by a Dirichlet process mixture of normals (DPMM). In numerical examples, we illustrate the finite-sample performance of the posterior estimates for…
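An illustrative sketch, not the dissertation's Bayesian P-spline/DPMM machinery, of what fitting a single-index model m(X^T ϴ) involves: for a candidate index on the unit sphere, the unknown link m is estimated by leave-one-out Nadaraya-Watson kernel regression on the index values, and the index is chosen to minimize the resulting residual sum of squares. The data, bandwidth and optimizer are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n, d = 400, 3
theta_true = np.array([0.8, 0.6, 0.0])
X = rng.standard_normal((n, d))
Y = np.sin(X @ theta_true) + 0.1 * rng.standard_normal(n)   # m(u) = sin(u), unknown to us

def nw_fit(u, y, h=0.3):
    """Leave-one-out Nadaraya-Watson estimate of E[Y | index = u_i] for each i."""
    K = np.exp(-0.5 * ((u[:, None] - u[None, :]) / h) ** 2)
    np.fill_diagonal(K, 0.0)                                 # leave-one-out to avoid overfit
    return (K @ y) / np.maximum(K.sum(axis=1), 1e-12)

def objective(theta_free):
    theta = theta_free / np.linalg.norm(theta_free)          # identify theta up to scale
    u = X @ theta
    return float(np.mean((Y - nw_fit(u, Y)) ** 2))

res = minimize(objective, x0=np.ones(d), method="Nelder-Mead")
theta_hat = res.x / np.linalg.norm(res.x)
if theta_hat[0] < 0:                                         # fix the sign convention
    theta_hat = -theta_hat
print("estimated index direction:", np.round(theta_hat, 2), " true:", theta_true)
```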
Advisors/Committee Members: Carroll, Raymond J. (advisor), Ma, Yanyuan (advisor), Smith, Roger (committee member), Harknett, Urshi Mueller (committee member), Mallick, Bani K. (committee member).
Subjects/Keywords: Bayesian Methods; Case-control; Dirichlet Process of Mixture Model; Efficiency; Heteroscedasticity; Kernel estimation; Nonparametric; P-splines; Robust; Secondary Analysis; Semiparametric; Single-Index Model
APA (6th Edition):
Rahman, S. (2015). Efficient Nonparametric and Semiparametric Regression Methods with application in Case-Control Studies. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/155719
Chicago Manual of Style (16th Edition):
Rahman, Shahina. “Efficient Nonparametric and Semiparametric Regression Methods with application in Case-Control Studies.” 2015. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/155719.
MLA Handbook (7th Edition):
Rahman, Shahina. “Efficient Nonparametric and Semiparametric Regression Methods with application in Case-Control Studies.” 2015. Web. 07 Mar 2021.
Vancouver:
Rahman S. Efficient Nonparametric and Semiparametric Regression Methods with application in Case-Control Studies. [Internet] [Doctoral dissertation]. Texas A&M University; 2015. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/155719.
Council of Science Editors:
Rahman S. Efficient Nonparametric and Semiparametric Regression Methods with application in Case-Control Studies. [Doctoral Dissertation]. Texas A&M University; 2015. Available from: http://hdl.handle.net/1969.1/155719

Texas A&M University
29.
Vyas, Aditya.
Application of Machine Learning in Well Performance Prediction, Design Optimization and History Matching.
Degree: PhD, Petroleum Engineering, 2017, Texas A&M University
URL: http://hdl.handle.net/1969.1/187248
► Finite difference based reservoir simulation is commonly used to predict well rates in these reservoirs. Such detailed simulation requires an accurate knowledge of reservoir geology.…
(more)
▼ Finite-difference reservoir simulation is commonly used to predict well rates in unconventional reservoirs. Such detailed simulation requires accurate knowledge of reservoir geology and can be very costly in terms of computational time. Recently, some studies have used machine learning to predict mean or maximum production rates for new wells by utilizing available well production and completion data in a given field; however, these studies cannot predict well rates as a function of time. This dissertation fills this gap by applying various machine learning algorithms to predict well decline rates as a function of time. This is achieved by utilizing data from multiple wells (production, completion, and location data) to build machine learning models that make rate-decline predictions for new wells. The study concludes that well completion and location variables can be correlated to decline-curve model parameters and Estimated Ultimate Recovery (EUR) with reasonable accuracy. Among the machine learning models studied, the Support Vector Machine (SVM) algorithm in conjunction with the Stretched Exponential Decline Model (SEDM) was the best predictor of well rate decline. This approach is very fast compared to reservoir simulation, does not require detailed reservoir information, and can quickly predict rate declines for multiple wells at the same time.
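As an illustration of the SVM-plus-SEDM workflow described above, the Python sketch below uses synthetic data, hypothetical feature names and parameter ranges, and off-the-shelf tools (SciPy's curve_fit, scikit-learn's SVR) rather than the author's actual code: stretched-exponential decline parameters are fitted well by well, then regressed on completion/location features to predict a new well's decline curve.

# (1) Fit a Stretched Exponential Decline Model, q(t) = qi * exp(-(t/tau)**n_exp),
#     to each existing well's rate history, then
# (2) train an SVM regressor mapping completion/location features to the fitted
#     SEDM parameters, and use it to predict the decline of a new well.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def sedm(t, qi, tau, n_exp):
    return qi * np.exp(-(t / tau) ** n_exp)

rng = np.random.default_rng(1)
n_wells, n_months = 60, 36
t = np.arange(1, n_months + 1, dtype=float)

# Hypothetical completion/location features: lateral length, proppant mass, x, y.
features = rng.uniform(0, 1, size=(n_wells, 4))
true_params = np.column_stack([
    800 + 1200 * features[:, 0],        # initial rate qi driven by lateral length
    5 + 20 * features[:, 1],            # characteristic time tau driven by proppant
    np.full(n_wells, 0.7),              # stretching exponent roughly constant
])

# Step 1: recover SEDM parameters from noisy monthly rate histories.
fitted = []
for qi, tau, n_exp in true_params:
    rates = sedm(t, qi, tau, n_exp) * rng.lognormal(0, 0.05, size=n_months)
    popt, _ = curve_fit(sedm, t, rates, p0=[rates[0], 10.0, 0.8],
                        bounds=([1, 0.1, 0.1], [1e4, 200, 1.5]))
    fitted.append(popt)
fitted = np.array(fitted)

# Step 2: SVM regression from features to the three SEDM parameters.
model = MultiOutputRegressor(make_pipeline(StandardScaler(), SVR(C=100.0)))
model.fit(features, fitted)

# Predict a full decline curve (and a cumulative-production proxy) for a new well.
new_well = rng.uniform(0, 1, size=(1, 4))
qi_hat, tau_hat, n_hat = model.predict(new_well)[0]
print("predicted 36-month cumulative production:", sedm(t, qi_hat, tau_hat, n_hat).sum())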
This dissertation also investigates hydraulic fracture design optimization in unconventional reservoirs. Previous studies have concentrated mainly on optimizing hydraulic fractures in a given permeability field, which may not be accurately known, and they do not account for the trade-off between the revenue generated by a given fracture design and the cost of implementing it. This study fills these gaps with a Genetic Algorithm (GA)-based workflow that finds the most suitable fracturing design (fracture locations, half-lengths, and widths) for a given unconventional reservoir by maximizing the Net Present Value (NPV). It is concluded that this method can optimize hydraulic fracture placement in the presence of natural-fracture/permeability uncertainty, and that it yields a much higher NPV than equally spaced hydraulic fractures with uniform dimensions.
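A minimal sketch of the GA-plus-NPV idea follows, assuming a toy NPV function with made-up economics and a fixed number of fracture half-lengths as the only design variables; it is not the dissertation's workflow, which couples the GA to reservoir simulation under permeability uncertainty.

# Genetic Algorithm searching for fracture half-lengths that maximize a toy NPV:
# revenue with diminishing returns and interference between neighboring fractures,
# minus a linear fracturing cost. All economic numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_frac, pop_size, n_gen = 8, 40, 60
lo, hi = 20.0, 300.0                  # allowable half-length range, ft

def npv(half_lengths):
    revenue = 5e3 * np.sum(np.sqrt(half_lengths))                         # diminishing returns
    interference = 2e2 * np.sum(np.minimum(half_lengths[:-1], half_lengths[1:]))
    cost = 1e2 * np.sum(half_lengths)                                     # proppant/fluid cost
    return revenue - interference - cost

def tournament(pop, fitness, k=3):
    # Pick the fittest of k randomly chosen individuals.
    idx = rng.integers(0, len(pop), size=k)
    return pop[idx[np.argmax(fitness[idx])]]

pop = rng.uniform(lo, hi, size=(pop_size, n_frac))
for gen in range(n_gen):
    fitness = np.array([npv(ind) for ind in pop])
    children = []
    for _ in range(pop_size):
        p1, p2 = tournament(pop, fitness), tournament(pop, fitness)
        alpha = rng.uniform(size=n_frac)                                  # blend crossover
        child = alpha * p1 + (1 - alpha) * p2
        mutate = rng.random(n_frac) < 0.1                                 # 10% gene mutation rate
        child[mutate] += rng.normal(0, 20, size=mutate.sum())
        children.append(np.clip(child, lo, hi))
    pop = np.array(children)

best = pop[np.argmax([npv(ind) for ind in pop])]
print("best design NPV:", round(npv(best)), "half-lengths:", np.round(best))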
Another problem investigated in this dissertation is field-scale history matching in unconventional shale-oil reservoirs. Stochastic optimization methods are commonly used for history-matching problems that require a large number of forward simulations because of the many uncertain variables with unrefined ranges. Previous studies commonly used single-stage history matching; this study presents a method utilizing multiple stages of GA. The most significant variables are separated out from the…
Advisors/Committee Members: Datta-Gupta, Akhil (advisor), King, Michael J. (committee member), Mallick, Bani K. (committee member), McVay, Duane A. (committee member).
Subjects/Keywords: Unconventional Reservoirs; Machine Learning; Data Analytics; Decline Curves; Hydraulic Fracture Optimization; History Matching
APA (6th Edition):
Vyas, A. (2017). Application of Machine Learning in Well Performance Prediction, Design Optimization and History Matching. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/187248
Chicago Manual of Style (16th Edition):
Vyas, Aditya. “Application of Machine Learning in Well Performance Prediction, Design Optimization and History Matching.” 2017. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/187248.
MLA Handbook (7th Edition):
Vyas, Aditya. “Application of Machine Learning in Well Performance Prediction, Design Optimization and History Matching.” 2017. Web. 07 Mar 2021.
Vancouver:
Vyas A. Application of Machine Learning in Well Performance Prediction, Design Optimization and History Matching. [Internet] [Doctoral dissertation]. Texas A&M University; 2017. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/187248.
Council of Science Editors:
Vyas A. Application of Machine Learning in Well Performance Prediction, Design Optimization and History Matching. [Doctoral Dissertation]. Texas A&M University; 2017. Available from: http://hdl.handle.net/1969.1/187248
30.
Stripling, Hayes Franklin.
The Method of Manufactured Universes for Testing Uncertainty Quantification Methods.
Degree: MS, Nuclear Engineering, 2011, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8986
► The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of…
(more)
▼ The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which "experimental" data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented for this research manufactures a particle-transport "universe," models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new "experiments" within the manufactured reality. To test the responses of these UQ methods further, we conduct exercises with "experimental" replicates, "measurement" error, and choices of physical inputs that reduce the accuracy of the diffusion model's approximation of our manufactured laws.
Our first application of MMU was rich in areas for exploration and highly informative. In the case of the Gaussian process code, we found that the fundamental statistical formulation was not appropriate for our functional data, but that the code allows a knowledgeable user to vary parameters within this formulation to tailor its behavior for a specific problem. The Bayesian MARS formulation was a more natural emulator given our manufactured laws, and we used the MMU framework to further develop a calibration method and to characterize the diffusion model discrepancy. Overall, we conclude that an MMU exercise with a properly designed universe (that is, one that is an adequate representation of some real-world problem) will provide the modeler with an added understanding of the interaction between a given UQ method and his/her more complex problem of interest. The modeler can then apply this added understanding and make more informed predictive statements.
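The Python sketch below illustrates the manufactured-universe idea with deliberately toy physics: exact exponential attenuation plays the role of the manufactured reality, a crude linear approximation stands in for the imperfect (diffusion-like) model, and a scikit-learn Gaussian process is trained on the model discrepancy so that predictions of new "experiments" carry an uncertainty band whose coverage can be checked. All numbers and function names are assumptions for illustration; this is not the thesis's Gaussian process or Bayesian MARS code.

# Manufactured-universe exercise: reality vs. an imperfect model plus a GP
# discrepancy term, followed by a check of how well the 2-sigma band covers reality.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
sigma_t = 1.3                                     # manufactured "true" cross section

def reality(x):                                   # the manufactured universe
    return np.exp(-sigma_t * x)

def imperfect_model(x):                           # deliberately crude approximation
    return np.clip(1.0 - sigma_t * x, 0.0, None)

# "Experimental" data with measurement error at a handful of depths.
x_obs = np.linspace(0.05, 1.5, 12)[:, None]
y_obs = reality(x_obs.ravel()) + rng.normal(0, 0.01, size=len(x_obs))

# Train a GP on the discrepancy between experiments and the imperfect model.
kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(x_obs, y_obs - imperfect_model(x_obs.ravel()))

# Predict new "experiments" and assess how well the uncertainty covers reality.
x_new = np.linspace(0.0, 2.0, 9)[:, None]
delta_mean, delta_std = gp.predict(x_new, return_std=True)
pred = imperfect_model(x_new.ravel()) + delta_mean
covered = np.abs(pred - reality(x_new.ravel())) <= 2 * delta_std
print("fraction of new points inside the 2-sigma band:", covered.mean())

Because the new points extend beyond the range of the "experiments," the coverage check also exposes how the UQ method behaves under extrapolation, which is the kind of assessment the framework is designed to make explicit.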
Advisors/Committee Members: Adams, Marvin L. (advisor), McClarren, Ryan G. (committee member), Mallick, Bani K. (committee member).
Subjects/Keywords: Uncertainty Quantification; Validation; Bayesian Inversion; Calibration
APA (6th Edition):
Stripling, H. F. (2011). The Method of Manufactured Universes for Testing Uncertainty Quantification Methods. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8986
Chicago Manual of Style (16th Edition):
Stripling, Hayes Franklin. “The Method of Manufactured Universes for Testing Uncertainty Quantification Methods.” 2011. Masters Thesis, Texas A&M University. Accessed March 07, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8986.
MLA Handbook (7th Edition):
Stripling, Hayes Franklin. “The Method of Manufactured Universes for Testing Uncertainty Quantification Methods.” 2011. Web. 07 Mar 2021.
Vancouver:
Stripling HF. The Method of Manufactured Universes for Testing Uncertainty Quantification Methods. [Internet] [Masters thesis]. Texas A&M University; 2011. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8986.
Council of Science Editors:
Stripling HF. The Method of Manufactured Universes for Testing Uncertainty Quantification Methods. [Masters Thesis]. Texas A&M University; 2011. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8986
◁ [1] [2] [3] ▶