You searched for +publisher:"University of Texas – Austin" +contributor:("Durrett, Gregory").
Showing records 1 – 3 of 3 total matches.
No search limiters apply to these results.
1.
Wang, Xinyu.
An efficient programming-by-example framework.
Degree: PhD, Computer Science, 2019, University of Texas – Austin
URL: http://dx.doi.org/10.26153/tsw/5854
Due to the ubiquity of computing, programming has started to become an essential skill for an increasing number of people, including data scientists, financial analysts, and spreadsheet users. While it is well known that building any complex and reliable software is difficult, writing even simple scripts is challenging for novices with no formal programming background. Therefore, there is an increasing need for technology that can provide basic programming support to non-expert computer end-users.
Program synthesis, as a technique for generating programs from high-level specifications such as input-output examples, has been used to automate many real-world programming tasks in a number of application domains such as spreadsheet programming and data science. However, developing specialized synthesizers for these application domains is notoriously hard.
This dissertation aims to make the development of program synthesizers easier so that we can expand the applicability of program synthesis to more application domains. In particular, it describes a programming-by-example framework that is both generic and efficient: the framework applies broadly to automating tasks across different application domains, and it achieves orders-of-magnitude improvements in synthesis speed over existing state-of-the-art techniques.
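To give a concrete flavor of programming by example, the toy sketch below enumerates compositions of a tiny string-manipulation DSL until one program is consistent with every input-output example. The DSL, the `synthesize` helper, and the brute-force breadth-first search are illustrative assumptions, not the dissertation's framework, which earns its speed through far more sophisticated search and pruning.

```python
# Minimal programming-by-example sketch: enumerate compositions of a tiny
# string DSL until one is consistent with all input-output examples.
# Illustrative only; real PBE systems prune the search space aggressively.
from itertools import product

# A tiny DSL: each primitive is a name plus a unary string-to-string function.
PRIMITIVES = {
    "lower":   str.lower,
    "strip":   str.strip,
    "first":   lambda s: s.split()[0] if s.split() else "",
    "reverse": lambda s: s[::-1],
}

def synthesize(examples, max_depth=3):
    """Breadth-first search over primitive compositions of length <= max_depth."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(s, names=names):
                for name in names:          # apply primitives left to right
                    s = PRIMITIVES[name](s)
                return s
            if all(program(i) == o for i, o in examples):
                return " -> ".join(names), program  # description + callable
    return None, None

# Usage: recover "lowercase, then take the first word" from two examples.
desc, prog = synthesize([("Hello World", "hello"), ("Program Synthesis", "program")])
print(desc)             # "lower -> first"
print(prog("Foo Bar"))  # "foo"
```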
Advisors/Committee Members: Dillig, Isil (advisor), Durrett, Gregory (committee member), Pingali, Keshav (committee member), Jhala, Ranjit (committee member), Naik, Mayur (committee member).
Subjects/Keywords: Programming languages; Program synthesis
APA (6th Edition):
Wang, X. (2019). An efficient programming-by-example framework. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/5854
Chicago Manual of Style (16th Edition):
Wang, Xinyu. “An efficient programming-by-example framework.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed January 24, 2021. http://dx.doi.org/10.26153/tsw/5854.
MLA Handbook (7th Edition):
Wang, Xinyu. “An efficient programming-by-example framework.” 2019. Web. 24 Jan 2021.
Vancouver:
Wang X. An efficient programming-by-example framework. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2021 Jan 24]. Available from: http://dx.doi.org/10.26153/tsw/5854.
Council of Science Editors:
Wang X. An efficient programming-by-example framework. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/5854

2.
-6301-1960.
Explainable improved ensembling for natural language and vision.
Degree: PhD, Computer science, 2019, University of Texas – Austin
URL: http://hdl.handle.net/2152/72820
Ensemble methods are well known in machine learning for improving prediction accuracy. However, they do not adequately discriminate among the underlying component models. How good a model is can sometimes be estimated from “why” it made a specific prediction. We propose a novel approach called Stacking With Auxiliary Features (SWAF) that effectively leverages component models by integrating such relevant contextual information to improve ensembling. Using auxiliary features, our algorithm learns to rely on systems that agree not only on an output prediction but also on the source or origin of that output.
We demonstrate our approach on challenging structured prediction problems in Natural Language Processing and Vision, including Information Extraction, Object Detection, and Visual Question Answering. We also present a variant of SWAF for combining systems that lack training data, in an unsupervised ensemble, with systems that have it. Our combined approach obtains a new state of the art, beating our prior performance on Information Extraction.
The state-of-the-art systems for many AI applications are ensembles of deep-learning models. These models are hard to interpret and can sometimes make odd mistakes. Explanations make AI systems more transparent and also justify their predictions. We propose a scalable approach to generating visual explanations for ensemble methods using the localization maps of the component systems. Crowdsourced human evaluation on two new metrics indicates that our ensemble’s explanations qualitatively outperform individual systems’ explanations by a significant margin.
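As a rough illustration of the stacking-with-auxiliary-features idea, the sketch below trains a meta-classifier on component systems' confidence scores concatenated with auxiliary features about each output's provenance. The feature layout, the toy data, and the use of scikit-learn's LogisticRegression are assumptions for illustration, not the dissertation's implementation.

```python
# Sketch of stacking with auxiliary features: a meta-classifier decides whether
# to accept a candidate output, seeing each component system's confidence plus
# auxiliary features describing where the output came from (its provenance).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_candidates, n_systems, n_aux = 200, 3, 2
confidences = rng.random((n_candidates, n_systems))  # per-system confidence in the candidate
aux = rng.random((n_candidates, n_aux))              # e.g. source reliability, provenance overlap
labels = (confidences.mean(axis=1) + 0.5 * aux[:, 0] > 0.8).astype(int)  # toy ground truth

# Plain stacking uses only the component confidences ...
plain = LogisticRegression().fit(confidences, labels)
# ... while SWAF-style stacking appends auxiliary features to each instance.
swaf = LogisticRegression().fit(np.hstack([confidences, aux]), labels)

print("plain stacker accuracy:", plain.score(confidences, labels))
print("with auxiliary features:", swaf.score(np.hstack([confidences, aux]), labels))
```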
Advisors/Committee Members: Mooney, Raymond J. (advisor), Erk, Katrin (committee member), Durrett, Gregory (committee member), Barker, Kenneth (committee member).
Subjects/Keywords: Explainable AI; NLP; Computer Vision; Stacking
APA (6th Edition):
-6301-1960. (2019). Explainable improved ensembling for natural language and vision. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/72820
Note: this citation may be lacking information needed for this citation format: Author name may be incomplete.
Chicago Manual of Style (16th Edition):
-6301-1960. “Explainable improved ensembling for natural language and vision.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed January 24, 2021. http://hdl.handle.net/2152/72820.
Note: this citation may be lacking information needed for this citation format: Author name may be incomplete.
MLA Handbook (7th Edition):
-6301-1960. “Explainable improved ensembling for natural language and vision.” 2019. Web. 24 Jan 2021.
Note: this citation may be lacking information needed for this citation format: Author name may be incomplete.
Vancouver:
-6301-1960. Explainable improved ensembling for natural language and vision. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2021 Jan 24]. Available from: http://hdl.handle.net/2152/72820.
Note: this citation may be lacking information needed for this citation format: Author name may be incomplete.
Council of Science Editors:
-6301-1960. Explainable improved ensembling for natural language and vision. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://hdl.handle.net/2152/72820
Note: this citation may be lacking information needed for this citation format: Author name may be incomplete.
3.
Zhang, Ye, 1989-.
Neural NLP models under low-supervision scenarios.
Degree: PhD, Computer Science, 2019, University of Texas – Austin
URL: http://dx.doi.org/10.26153/tsw/2140
Neural models have been shown to work well for natural language processing tasks when one has large amounts of labeled data, but problems arise when this is not the case. In this thesis we investigate several ‘low-supervision’ scenarios in which we do not have sufficient training data, and we propose methods to improve performance in these scenarios.
First, we consider the scenario where we can use other types of resources in addition to the limited training labels. For instance, we can ask human annotators to provide rationales supporting their labels (annotations) for training examples. To capitalize on such supervision, we develop a neural model that can train on both instance labels and associated rationales. We also investigate how to incorporate existing ontologies into neural models. Specifically, we develop a novel training algorithm that enforces weight sharing among similar words in the ontologies, thus inductively biasing the neural model training.
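As a rough illustration of training on labels plus rationales, the sketch below adds a loss term that rewards attention mass on annotator-marked tokens. The attention-pooled architecture, the loss weighting, and all names here are assumptions for illustration, not the model developed in the thesis.

```python
# Sketch: train on instance labels plus annotator rationales by adding a loss
# term that pushes the model's token attention toward rationale tokens.
import torch
import torch.nn as nn

class RationaleAwareClassifier(nn.Module):
    def __init__(self, vocab_size=1000, dim=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.score = nn.Linear(dim, 1)        # per-token attention logit
        self.out = nn.Linear(dim, n_classes)

    def forward(self, tokens):                # tokens: (batch, seq_len) int ids
        h = self.embed(tokens)                                    # (B, T, D)
        attn = torch.softmax(self.score(h).squeeze(-1), dim=-1)   # (B, T)
        pooled = (attn.unsqueeze(-1) * h).sum(dim=1)              # attention-weighted pooling
        return self.out(pooled), attn

model = RationaleAwareClassifier()
tokens = torch.randint(0, 1000, (8, 20))
labels = torch.randint(0, 2, (8,))
rationales = (torch.rand(8, 20) > 0.8).float()   # 1 where an annotator marked a token

logits, attn = model(tokens)
task_loss = nn.functional.cross_entropy(logits, labels)
# Supervise attention with rationales: reward attention mass on marked tokens.
rationale_loss = -(attn * rationales).sum(dim=1).mean()
loss = task_loss + 0.1 * rationale_loss
loss.backward()
```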
In addition to incorporating other types of resources beyond instance labels, we also use transfer learning techniques, which are general means of learning in low-supervision settings. We study how to use multiple sets of pre-trained word embeddings as inputs to neural models and fine-tune them to the task at hand in a more intelligent way than simply concatenating them. We also develop a novel model for text generation in which the model can generate text from a new domain (unseen in the training data). Rather than simply fine-tuning the model on the target domain, the model fully uses the domain information in the training set, allowing it to generate domain-specific text.
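The "more intelligent than concatenation" idea can be pictured as learning a mixture over several pre-trained embedding sets. The shared projection and softmax mixture weights below are an assumed design sketch, not the thesis's actual method.

```python
# Sketch: combine several pre-trained embedding sets with a learned per-set
# mixture instead of naive concatenation.
import torch
import torch.nn as nn

class EmbeddingMixer(nn.Module):
    def __init__(self, embedding_matrices, dim=64):
        super().__init__()
        # Each pre-trained matrix becomes a frozen lookup table ...
        self.tables = nn.ModuleList(
            nn.Embedding.from_pretrained(m, freeze=True) for m in embedding_matrices
        )
        # ... projected into a shared space with trainable weights.
        self.proj = nn.ModuleList(nn.Linear(m.shape[1], dim) for m in embedding_matrices)
        self.mix = nn.Parameter(torch.zeros(len(embedding_matrices)))  # per-set logits

    def forward(self, tokens):
        views = torch.stack(
            [p(t(tokens)) for t, p in zip(self.tables, self.proj)], dim=0
        )                                          # (n_sets, B, T, dim)
        weights = torch.softmax(self.mix, dim=0)   # learned mixture over embedding sets
        return (weights.view(-1, 1, 1, 1) * views).sum(dim=0)

# Two toy "pre-trained" embedding sets with different dimensionalities.
mixer = EmbeddingMixer([torch.randn(1000, 50), torch.randn(1000, 300)])
print(mixer(torch.randint(0, 1000, (4, 12))).shape)   # torch.Size([4, 12, 64])
```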
Lastly, we consider how to collect data under a limited budget more efficiently than by randomly selecting unlabeled data for annotation. We develop new active learning (AL) methods, tailored to neural models, that collect more informative examples to be annotated, so that better models and more discriminative text representations can be learned with fewer labels. Following this, we further develop new AL approaches for when we have richly annotated data from a relevant domain; that is, we combine AL and transfer learning and leverage the advantages of both methods. We also investigate how to use the pre-trained deep bidirectional transformer (BERT) to actively select labels.
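A minimal version of pool-based active learning, assuming margin-based uncertainty sampling with a simple classifier standing in for the neural models and BERT-based selection studied in the thesis:

```python
# Sketch: pool-based active learning with uncertainty sampling. In each round,
# label the unlabeled examples the current model is least confident about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels, hidden from the learner

# Small balanced seed set; everything else starts in the unlabeled pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(500) if i not in set(labeled)]

for round_ in range(5):
    model = LogisticRegression().fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    margin = np.abs(probs[:, 1] - probs[:, 0])   # small margin = uncertain
    picks = np.argsort(margin)[:20]              # query the 20 most uncertain
    labeled += [pool[i] for i in picks]
    pool = [i for i in pool if i not in set(labeled)]
    print(f"round {round_}: {len(labeled)} labels, acc {model.score(X, y):.3f}")
```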
Advisors/Committee Members: Lease, Matthew A. (advisor), Wallace, Byron C. (advisor), Durrett, Gregory C. (committee member), Mooney, Raymond J. (committee member).
Subjects/Keywords: Neural models; Natural language processing; Low-supervision
APA (6th Edition):
Zhang, Y. (2019). Neural NLP models under low-supervision scenarios. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/2140
Chicago Manual of Style (16th Edition):
Zhang, Ye. “Neural NLP models under low-supervision scenarios.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed January 24, 2021. http://dx.doi.org/10.26153/tsw/2140.
MLA Handbook (7th Edition):
Zhang, Ye. “Neural NLP models under low-supervision scenarios.” 2019. Web. 24 Jan 2021.
Vancouver:
Zhang Y. Neural NLP models under low-supervision scenarios. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2021 Jan 24]. Available from: http://dx.doi.org/10.26153/tsw/2140.
Council of Science Editors:
Zhang Y. Neural NLP models under low-supervision scenarios. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/2140