You searched for subject:(Deep learning). Showing records 1 – 30 of 3078 total matches.

Oregon State University
1.
Ghaeini, Mohammad Reza.
Event Detection with Forward-Backward Recurrent Neural Networks.
Degree: MS, 2017, Oregon State University
URL: http://hdl.handle.net/1957/61576
Automatic event extraction from natural text is an important and challenging task for natural language understanding. Traditional event detection methods rely heavily on manually engineered rich features. Recent deep learning approaches alleviate this problem through automatic feature engineering. But such efforts, like traditional methods, have so far focused only on single-token event mentions, whereas in practice events can also be described by a phrase. In this thesis, we introduce and apply forward-backward recurrent neural networks (FBRNNs) to detect events that can be either words or phrases. Experimental results demonstrate that FBRNN is competitive with state-of-the-art methods on the ACE 2005 and Rich ERE 2015 event detection tasks.
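The abstract does not spell out the FBRNN architecture, but the general idea can be sketched: run a forward RNN over the text up to the end of a candidate phrase and a backward RNN from the right end of the sentence down to the phrase's start, then combine the two final states into a phrase representation for a classifier. All sizes, weights, and inputs below are illustrative assumptions, not the thesis's model:

```python
import numpy as np

rng = np.random.default_rng(0)
E, H = 16, 8                                 # assumed embedding / hidden sizes

def rnn(seq, Wx, Wh):
    # Plain tanh RNN; returns the final hidden state.
    h = np.zeros(H)
    for x in seq:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

Wx_f, Wh_f = rng.normal(size=(H, E)), rng.normal(size=(H, H))
Wx_b, Wh_b = rng.normal(size=(H, E)), rng.normal(size=(H, H))

sent = rng.normal(size=(10, E))              # 10 embedded tokens (stand-in input)
start, end = 3, 6                            # candidate phrase: tokens 3..5

h_fwd = rnn(sent[:end], Wx_f, Wh_f)          # forward pass up to the phrase's end
h_bwd = rnn(sent[start:][::-1], Wx_b, Wh_b)  # backward pass down to its start
features = np.concatenate([h_fwd, h_bwd])    # phrase representation, shape (16,)
```

A classifier over `features` would then score the span as an event trigger or not; because the span can cover several tokens, phrase-level mentions are handled naturally.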
Advisors/Committee Members: Fern, Xiaoli Z. (advisor), Tadepalli, Prasad (committee member).
Subjects/Keywords: Deep Learning

California State Polytechnic University – Pomona
2.
Frank, Hakeem.
Gaussian Process Models for Computer Vision.
Degree: MS, Department of Mathematics and Statistics, 2020, California State Polytechnic University – Pomona
URL: http://hdl.handle.net/10211.3/216857
Supervised learning is the task of finding a function f(x) that maps an input x to an output y using observed data. Gaussian process models approach supervised learning by assuming a probability distribution over a space of possible functions, using observed data to update the space of functions under consideration via Bayes' theorem, and taking the expected value over the space of functions to obtain an estimate for f(x). While Gaussian process models are commonly used in time-series and regression domains, they can be extended to classification tasks using a response function and variational inference. This thesis investigates Gaussian process models for image classification tasks, with an emphasis on kernels that are effective for the high-dimensional nature of image data. Specifically, stationary and non-stationary kernels are compared with each other, and their performance is analyzed on image recognition tasks. The models are evaluated on high-resolution aerial images, a handwritten digit dataset, and a dataset of X-ray images of patients exhibiting signs of pneumonia.
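The three steps described (a prior over functions, a Bayesian update, a posterior expectation) reduce, for regression with a Gaussian likelihood, to a closed-form posterior mean. A minimal sketch with an assumed RBF kernel and toy data (illustrative, not the thesis's models):

```python
import numpy as np

def rbf_kernel(a, b, length_scale):
    # Squared-exponential kernel matrix between 1-D point sets a and b.
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior_mean(x_train, y_train, x_test, length_scale=0.2, noise=1e-2):
    # Posterior mean of a zero-mean GP: K_*x (K_xx + noise*I)^-1 y.
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_star = rbf_kernel(x_test, x_train, length_scale)
    return K_star @ np.linalg.solve(K, y_train)

x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)                  # noiseless toy targets
x_new = np.array([0.25])
mean = gp_posterior_mean(x, y, x_new)      # close to sin(pi/2) = 1
```

For classification, as the abstract notes, the Gaussian likelihood is replaced by a response function (e.g. a sigmoid), which breaks this closed form and motivates variational inference.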
Advisors/Committee Members: Risk, Jimmy (advisor), King, Adam (committee member).
Subjects/Keywords: deep learning

Universidad de Cantabria
3.
Noriega Puente, Andrea.
Segmentación de gliomas en imagen de resonancia magnética multimodal: Glioma segmentation in multimodal magnetic resonance imaging.
Degree: Máster en Ciencia de Datos, 2019, Universidad de Cantabria
URL: http://hdl.handle.net/10902/17859
ABSTRACT: Glioma is the most common type of brain tumor, presenting varying degrees of malignancy and aggressiveness as well as a variable prognosis. The great variability that characterizes these lesions calls for an entirely individual study and a personalized treatment. For this reason, manual segmentation of such lesions is both very time-consuming and hard work for the expert, involving multimodal analysis of several different sequences, usually from Magnetic Resonance imaging.
This work presents a model for automatic segmentation of high-grade gliomas (HGG) whose architecture is based on symmetric convolutional neural networks, known as U-Net. The model was trained on the BraTS 2018 (Brain Tumor Segmentation) Data Challenge dataset, comprising a total of 237 MRI studies, 162 HGG and 75 LGG, of which only the former were used. The results in terms of obtaining binary tumor segmentation maps were positive. The mean DICE coefficient, related to the quality of the tumor shape determination, was 0.64 ± 0.39 for the model trained on T1 images and 0.57 ± 0.41 for the model trained on contrast-enhanced T1 sequences. Qualitatively, good performance was also observed in spatially locating the tumors, even when the shape was not determined as precisely.
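The DICE coefficient used above to score segmentation quality can be computed directly from binary masks; a minimal sketch on toy masks (not the thesis's data):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    # DICE = 2 |A ∩ B| / (|A| + |B|) for binary segmentation masks.
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 foreground voxels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 voxels, 4 shared with a
score = dice(a, b)                                 # 2*4 / (4+6) ≈ 0.8
```

A score of 1 means perfect overlap and 0 means none, which is why a mean of 0.64 with a large standard deviation indicates good shape recovery on some cases and poor recovery on others.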
Advisors/Committee Members: Rodríguez González, David (advisor), Universidad de Cantabria (other).
Subjects/Keywords: Deep Learning

California State Polytechnic University – Pomona
4.
Shimpi, Shubhangi.
Deep Recurrent Neural Networks for Seizure Prediction in Epileptic Patients.
Degree: MS, Department of Computer Science, 2018, California State Polytechnic University – Pomona
URL: http://hdl.handle.net/10211.3/199949
Electroencephalogram (EEG) data contains information about the electrical activity of the brain and is therefore commonly used to diagnose underlying neurological conditions such as epilepsy. Epileptic patients are at risk of life-threatening incidents when driving a vehicle or operating machinery, so it is important to detect the phases in which patients are most likely to have seizures. Manual detection of seizures is expensive because it involves visual examination of hours of EEG data. There is a need for a built-in seizure detection and prediction system that automatically classifies EEG signals into various signal types using machine-learning techniques. In this project, we present a seizure detection model and a seizure prediction model using recurrent neural networks (RNNs). We propose various deep RNN models to predict seizures. As a preliminary evaluation, the accuracy of each model is measured on a publicly available dataset. The LSTM model correctly classifies the EEG data with a prediction accuracy of 96%, while the lightweight GRU model demonstrates promising results with an overall prediction accuracy of 98%.
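The "lightweight" GRU mentioned above uses two gates where an LSTM uses three (and carries no separate cell state), which is the source of its smaller parameter count. A minimal single-cell sketch with assumed toy sizes, not the thesis's model:

```python
import numpy as np

rng = np.random.default_rng(0)
I, H = 4, 6                                  # assumed input (EEG features) / hidden sizes

# One GRU cell: update gate z, reset gate r, candidate state h_tilde.
Wz, Wr, Wh = (rng.normal(size=(H, I + H)) * 0.1 for _ in range(3))

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(x, h):
    xh = np.concatenate([x, h])
    z = sigmoid(Wz @ xh)                     # update gate: how much to rewrite
    r = sigmoid(Wr @ xh)                     # reset gate: how much history to use
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]))
    return (1 - z) * h + z * h_tilde

h = np.zeros(H)
for x in rng.normal(size=(20, I)):           # 20 timesteps of toy EEG features
    h = gru_step(x, h)                       # final h summarizes the window
```

A classification head over the final `h` would then label the window as seizure or non-seizure.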
Advisors/Committee Members: Jayarathna, Sampath (advisor), Sun, Yu (committee member).
Subjects/Keywords: deep learning

University of Sydney
5.
Windrim, Lloyd.
Illumination Invariant Deep Learning for Hyperspectral Data.
Degree: 2018, University of Sydney
URL: http://hdl.handle.net/2123/18734
Motivated by the variability in hyperspectral images due to illumination and the difficulty of acquiring labelled data, this thesis proposes different approaches for learning illumination-invariant feature representations and classification models for hyperspectral data captured outdoors, under natural sunlight. The approaches integrate domain knowledge into learning algorithms and hence do not rely on a priori knowledge of atmospheric parameters, additional sensors, or large amounts of labelled training data. Hyperspectral sensors record rich semantic information from a scene, making them useful for robotics or remote sensing applications where perception systems are used to gain an understanding of the scene. Images recorded by hyperspectral sensors can, however, be affected to varying degrees by intrinsic factors relating to the sensor itself (keystone, smile, noise, particularly at the limits of the sensed spectral range) but also by extrinsic factors such as the way the scene is illuminated. The appearance of the scene in the image is tied to the incident illumination, which depends on variables such as the position of the sun, the geometry of the surface, and the prevailing atmospheric conditions. Effects like shadows can cause the appearance and spectral characteristics of identical materials to differ significantly. This degrades the performance of high-level algorithms that use hyperspectral data, such as those that perform classification and clustering. If sufficient training data is available, learning algorithms such as neural networks can capture variability in the scene appearance and be trained to compensate for it. Learning algorithms are advantageous for this task because they do not require a priori knowledge of the prevailing atmospheric conditions or data from additional sensors.
Labelling of hyperspectral data is, however, difficult and time-consuming, so acquiring enough labelled samples for the learning algorithm to adequately capture the scene appearance is challenging. Hence, there is a need for techniques that are invariant to the effects of illumination and that do not require large amounts of labelled data. In this thesis, an approach to learning a representation of hyperspectral data that is invariant to the effects of illumination is proposed. This approach combines a physics-based model of the illumination process with an unsupervised deep learning algorithm, and thus requires no labelled data. Datasets that vary both temporally and spatially are used to compare the proposed approach to other similar state-of-the-art techniques. The results show that the learnt representation is more invariant to shadows in the image and to variations in brightness due to changes in the scene topography or the position of the sun in the sky. The results also show that a supervised classifier can predict class labels more accurately and more consistently across time when images are represented using the proposed method. Additionally, this thesis proposes methods to train supervised…
Subjects/Keywords: Deep learning

University of Oxford
6.
Lee, Namhoon.
Toward efficient deep learning with sparse neural networks.
Degree: PhD, 2020, University of Oxford
URL: http://ora.ox.ac.uk/objects/uuid:000e9d44-0229-48a3-84b0-dc17a8e96ccf ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.820743
Despite the tremendous success that deep learning has achieved in recent years, it remains challenging to deal with the excessive computational and memory cost involved in executing deep-learning-based applications. To address this challenge, this thesis studies sparse neural networks, particularly their construction, initialization, and large-scale training, as a step toward efficient deep learning. First, this thesis addresses the problem of finding sparse neural networks by pruning. Network pruning is an effective methodology for sparsifying neural networks, and yet existing approaches often introduce hyperparameters that either need to be tuned with expert knowledge or are based on ad hoc intuitions, and they typically entail iterative training steps. Instead, this thesis begins by proposing an efficient pruning method that is applied to a neural network prior to training, in a single shot. The sparse neural network obtained with this method, once trained, exhibits state-of-the-art performance on various image classification tasks. Albeit efficient, it remains unclear exactly why this approach of pruning at initialization can be effective. This thesis then extends the method by developing a new perspective from which the problem of finding trainable sparse neural networks is approached via network initialization. As initialization is key to finding and training sparse neural networks, this thesis proposes a sufficient initialization condition that can be satisfied with a simple optimization step and, once achieved, accelerates the training of sparse neural networks quite significantly. While sparse neural networks can be obtained by pruning at initialization, there has been little study of the subsequent training of these sparse networks.
This thesis lastly concentrates on studying data parallelism, a straightforward approach to speeding up neural network training by parallelizing it across a distributed computing system, under the influence of sparsity. To this end, the effects of data parallelism and sparsity are first measured accurately through extensive experiments accompanied by metaparameter search. This thesis then establishes theoretical results that precisely account for these effects, which had previously been addressed only partially and empirically and thus remained debatable.
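The single-shot pruning-before-training idea described above can be sketched as saliency-based masking: score each connection at initialization and keep only the top-k. The toy below uses a linear model with an analytic gradient and a |gradient × weight| saliency; the setup and sizes are illustrative assumptions, not the thesis's exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy: one linear layer with squared loss; the gradient is analytic,
# so the saliency can be computed without a deep-learning framework.
X = rng.normal(size=(64, 8))
y = rng.normal(size=64)
w = rng.normal(size=8) * 0.1               # freshly initialized weights

grad = X.T @ (X @ w - y) / len(X)          # dL/dw for L = 0.5*mean((Xw - y)^2)
saliency = np.abs(grad * w)                # connection sensitivity score

k = 4                                      # sparsity budget: keep 4 of 8 weights
mask = np.zeros_like(w)
mask[np.argsort(saliency)[-k:]] = 1.0
w_pruned = w * mask                        # pruned once, prior to any training
```

Training would then proceed only on the surviving connections (the mask is applied after every update), avoiding the iterative prune-retrain cycles of conventional pruning.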
Subjects/Keywords: Deep learning

Penn State University
7.
Gupta, Samarth.
Supervised Machine Learning for Region Assignment of Zebrafish Brain Nuclei based on Computational Assessment of Cell Neighborhoods.
Degree: 2020, Penn State University
URL: https://submit-etda.libraries.psu.edu/catalog/17771sxg646
Histological studies provide cellular insights into tissue architecture and have been central to phenotyping and biological discovery. Synchrotron X-ray micro-tomography of tissue, or "X-ray histotomography", yields three-dimensional reconstructions of fixed and stained specimens without sectioning. These reconstructions permit the computational creation of histology-like sections in any user-defined plane and slice thickness. Furthermore, they provide an exciting new basis for volumetric, computational histological phenotyping at cellular resolution. In this paper, we demonstrate the computational characterization of the zebrafish central nervous system imaged by Synchrotron X-ray micro-CT through the classification of small cellular-neighborhood volumes centered at each detected nucleus in a 3D tomographic reconstruction. First, we implement a deep learning-based nucleus detector to detect nuclear centroids. We then develop, train, and test a convolutional neural network architecture for automatic classification of brain nuclei into different tissue regions using five different neighborhood sizes containing 8, 12, 16, 20, and 24 isotropic voxels (0.743 x 0.743 x 0.743 μm each), corresponding to boxes with 5.944, 8.916, 11.89, 14.86, and 17.83 μm sides, respectively. We show that even with small cell neighborhoods, our proposed model is able to characterize brain nuclei into the major tissue regions with an F1 score of 81.14% and a sensitivity of 80.53%. Using our detector and classifier, we obtained very good results for fully segmenting major zebrafish brain regions in the 3D scan through patch-wise labeling of cell neighborhoods.
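The neighborhood extraction described above (cubic volumes of 8-24 voxels centered on each detected nucleus) can be sketched as a padded 3D crop; the helper below is an illustrative assumption, not the authors' code:

```python
import numpy as np

def neighborhood(volume, centroid, size):
    """Crop a cubic patch of `size` isotropic voxels centered on a nucleus.

    `volume` is a 3D tomographic reconstruction; the volume is zero-padded
    so centroids near the border still yield a full-sized patch.
    """
    half = size // 2
    pad = np.pad(volume, half, mode="constant")
    z, y, x = (int(c) + half for c in centroid)   # shift indices into padded grid
    return pad[z - half:z - half + size,
               y - half:y - half + size,
               x - half:x - half + size]

vol = np.arange(6 * 6 * 6, dtype=float).reshape(6, 6, 6)
patch = neighborhood(vol, (0, 0, 0), 8)   # 8-voxel neighborhood, the smallest tested
```

Each such patch, labeled by the region of its central nucleus, becomes one training example for the patch-wise classifier.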
Advisors/Committee Members: Sharon Xiaolei Huang, Thesis Advisor/Co-Advisor, Mary Beth Rosson, Program Head/Chair, Suhang Wang, Committee Member, Fenglong Ma, Committee Member.
Subjects/Keywords: Deep Learning; Microtomography

Texas A&M University
8.
Vallamkonda, Abhilash Rajendra Babu.
Model Attack on Convolutional Neural Networks.
Degree: MS, Computer Science, 2019, Texas A&M University
URL: http://hdl.handle.net/1969.1/188808
Deep learning is a machine learning technique that enables computers to learn directly from images, text, or sound in the same way that people do. It is a key technology enabling self-driving cars and speech recognition. In the past few years, deep learning has been successfully used in a wide range of applications and has demonstrated results beyond what computers were thought to be capable of. This new technology is poised to change the way we live. Despite these successes, the exact working of deep learning models is not well understood, and they can fail in several unintuitive ways. One such vulnerability is that small modifications to the input, which might not even be noticeable to humans, are enough to fool these models. This vulnerability has received significant attention from the research community and is a well-studied problem. Our focus is the scenario where the parameters of the model, rather than its inputs, are maliciously modified. Deep learning models contain a large number of parameters that interact with each other in complex ways, so small perturbations to a large number of parameters can produce a cumulative effect, causing the model to misbehave. Further, noise inherent in practical systems can act as a camouflage for such malicious perturbations, making them difficult to detect. Even though deep learning models have produced amazing results, their vulnerabilities present a serious concern that must be overcome before they can be deployed in practical systems. In this work, we evaluate the threat of attackers maliciously modifying model parameters to compromise the model. We demonstrate that small perturbations to the parameters are enough to compromise the model without significantly affecting its performance. We also study the characteristics of these malicious perturbations and devise a strategy to detect such an attack.
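The camouflage effect described above can be illustrated on a toy model: a small random perturbation of the weights barely moves accuracy, so a comparably small malicious perturbation is hard to distinguish from benign noise. This is an assumed linear-classifier setup, not the thesis's attack, which crafts perturbations adversarially rather than at random:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear classifier: labels are generated by the "true" weights,
# so the unperturbed model is perfectly accurate by construction.
X = rng.normal(size=(500, 20))
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(int)

def accuracy(w):
    return ((X @ w > 0).astype(int) == y).mean()

w = w_true.copy()                      # stands in for a trained model
delta = 0.05 * rng.normal(size=20)     # small perturbation, ~5% of weight scale

acc_clean = accuracy(w)                # 1.0 by construction
acc_pert = accuracy(w + delta)         # only slightly degraded
```

The attack scenario in the thesis exploits exactly this gap: perturbations of this magnitude can be chosen to change specific predictions while overall accuracy stays close to `acc_clean`.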
Advisors/Committee Members: Jiang, Anxiao (advisor), Liu, Tie (committee member), Huang, Ruihong (committee member).
Subjects/Keywords: Deep Learning; Security

Georgia Tech
9.
Choi, Edward.
Doctor AI: Interpretable deep learning for modeling electronic health records.
Degree: PhD, Computational Science and Engineering, 2018, Georgia Tech
URL: http://hdl.handle.net/1853/60226
Deep learning has recently shown superior performance in complex domains such as computer vision, audio processing, and natural language processing compared to traditional statistical methods. Naturally, deep learning techniques, combined with the large electronic health record (EHR) data generated by healthcare organizations, have the potential to bring dramatic changes to the healthcare industry. However, typical deep learning models can be seen as highly expressive black boxes, making them difficult to adopt in real-world healthcare applications due to a lack of interpretability. For deep learning methods to be readily adopted in real-world clinical practice, they must be interpretable without sacrificing prediction accuracy. In this thesis, we propose interpretable and accurate deep learning methods for modeling EHR, specifically focusing on longitudinal EHR data. We begin with a direct application of a well-known deep learning algorithm, recurrent neural networks (RNNs), to capture the temporal nature of longitudinal EHR. Then, based on this initial approach, we develop interpretable deep learning models by focusing on three aspects of computational healthcare: efficient representation learning of medical concepts, code-level interpretation for sequence predictions, and leveraging domain knowledge in the model. Another important aspect addressed in this thesis is developing a framework for effectively utilizing multiple data sources (e.g., diagnoses, medications, procedures), which can be extended in the future to incorporate wider data modalities such as lab values and clinical notes.
Advisors/Committee Members: Sun, Jimeng (advisor), Duke, Jon (committee member), Eisenstein, Jacob (committee member), Rehg, James (committee member), Stewart, Walter F. (committee member).
Subjects/Keywords: Deep learning; Healthcare

Cornell University
10.
Lenz, Ian.
Deep Learning For Robotics.
Degree: PhD, Computer Science, 2016, Cornell University
URL: http://hdl.handle.net/1813/44317
▼ Robotics faces many unique challenges as robotic platforms move out of the lab and into the real world. In particular, the huge variety encountered in real-world environments is extremely challenging for existing robotic control algorithms to handle. This necessitates the use of machine learning algorithms, which are able to learn controls from data. However, most conventional learning algorithms require hand-designed parameterized models and features, which are infeasible to design for many robotic tasks. Deep learning algorithms are general non-linear models that learn features directly from data, making them an excellent choice for such robotics applications. However, care must be taken to design deep learning algorithms and supporting systems appropriate for the task at hand. In this work, I describe two applications of deep learning algorithms and one application of hardware neural networks to difficult robotics problems. The problems addressed are robotic grasping, food cutting, and aerial robot obstacle avoidance, but the algorithms presented are designed to generalize to related tasks.
Advisors/Committee Members: Saxena, Ashutosh (chair), Snavely, Keith Noah (committee member), Manohar, Rajit (committee member), Knepper, Ross A (committee member).
Subjects/Keywords: Robotics; Machine learning; Deep learning
APA (6th Edition):
Lenz, I. (2016). Deep Learning For Robotics. (Doctoral Dissertation). Cornell University. Retrieved from http://hdl.handle.net/1813/44317
Chicago Manual of Style (16th Edition):
Lenz, Ian. “Deep Learning For Robotics.” 2016. Doctoral Dissertation, Cornell University. Accessed February 27, 2021.
http://hdl.handle.net/1813/44317.
MLA Handbook (7th Edition):
Lenz, Ian. “Deep Learning For Robotics.” 2016. Web. 27 Feb 2021.
Vancouver:
Lenz I. Deep Learning For Robotics. [Internet] [Doctoral dissertation]. Cornell University; 2016. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1813/44317.
Council of Science Editors:
Lenz I. Deep Learning For Robotics. [Doctoral Dissertation]. Cornell University; 2016. Available from: http://hdl.handle.net/1813/44317

University of KwaZulu-Natal
11.
Govender, Lishen.
Determination of quantum entanglement concurrence using multilayer perceptron neural networks.
Degree: 2017, University of KwaZulu-Natal
URL: http://hdl.handle.net/10413/15713
▼ Artificial neural networks, inspired by biological neural networks, have seen widespread implementation across all research areas in the past few years. This is partly due to recent developments in the field and mostly due to the increased accessibility of hardware and cloud computing capable of realising artificial neural network models. As the implementation of neural networks and deep learning in general becomes more ubiquitous in everyday life, we seek to leverage this powerful tool to aid in furthering research in quantum information science.
Concurrence is a measure of entanglement that quantifies the "amount" of entanglement contained within both pure and mixed entangled states [1]. In this thesis, artificial neural networks are used to build models that predict concurrence; in particular, models are trained on mixed-state inputs and used for pure-state prediction. Conversely, additional models are trained on pure-state inputs and used for mixed-state prediction. An overview of the prediction performance is presented, along with an analysis of the predictions.
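As a reference for what such networks are trained to predict: for a two-qubit pure state |ψ⟩ = a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩ the concurrence has the closed form C = 2|ad − bc|. A quick sketch (the thesis's networks, of course, learn this mapping from data rather than applying the formula):

```python
import numpy as np

def concurrence_pure(psi):
    """Concurrence C = 2|a*d - b*c| for a two-qubit pure state
    psi = [a, b, c, d] in the computational basis |00>,|01>,|10>,|11>."""
    a, b, c, d = psi
    return 2 * abs(a * d - b * c)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # maximally entangled Bell state
product = np.array([1, 0, 0, 0])             # separable state |00>
print(round(concurrence_pure(bell), 6))  # 1.0
print(concurrence_pure(product))         # 0
```

Mixed states require the Wootters formula over the density matrix's spin-flipped eigenvalues, which is exactly why a learned predictor is attractive there.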
Advisors/Committee Members: Petruccione, Francesco. (advisor), Sinayskiy, Ilya. (advisor).
Subjects/Keywords: Deep learning.; Machine learning.
APA (6th Edition):
Govender, L. (2017). Determination of quantum entanglement concurrence using multilayer perceptron neural networks. (Thesis). University of KwaZulu-Natal. Retrieved from http://hdl.handle.net/10413/15713
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Govender, Lishen. “Determination of quantum entanglement concurrence using multilayer perceptron neural networks.” 2017. Thesis, University of KwaZulu-Natal. Accessed February 27, 2021.
http://hdl.handle.net/10413/15713.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Govender, Lishen. “Determination of quantum entanglement concurrence using multilayer perceptron neural networks.” 2017. Web. 27 Feb 2021.
Vancouver:
Govender L. Determination of quantum entanglement concurrence using multilayer perceptron neural networks. [Internet] [Thesis]. University of KwaZulu-Natal; 2017. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/10413/15713.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Govender L. Determination of quantum entanglement concurrence using multilayer perceptron neural networks. [Thesis]. University of KwaZulu-Natal; 2017. Available from: http://hdl.handle.net/10413/15713
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Princeton University
12.
Ravi, Sachin.
Meta-Learning for Data and Processing Efficiency.
Degree: PhD, 2019, Princeton University
URL: http://arks.princeton.edu/ark:/88435/dsp013j333513x
▼ Deep learning models have shown great success in a variety of machine learning benchmarks; however, these models still lack the efficiency and flexibility of humans. Current deep learning methods involve training on a large amount of data to produce a model that can then specialize to the specific task encoded by the training data. Humans, on the other hand, are able to learn new concepts throughout our lives with comparatively little feedback. In order to bridge this gap, previous work has suggested the use of meta-learning. Rather than learning how to do a specific task, meta-learning involves learning how to learn, and utilizing this knowledge to learn new tasks more effectively. This thesis focuses on using meta-learning to improve the data and processing efficiency of deep learning models when learning new tasks.
First, we discuss a meta-learning model for the few-shot learning problem, where the aim is to learn a new classification task with unseen classes from few labeled examples. We use an LSTM-based meta-learner model to learn both the initialization and the optimization algorithm used to train another neural network, and show that our method compares favorably to nearest-neighbor approaches. The second part of the thesis deals with improving the predictive uncertainty of models in the few-shot learning setting. Using a Bayesian perspective, we propose a meta-learning method which efficiently amortizes hierarchical variational inference across tasks, learning a prior distribution over neural network weights so that a few steps of gradient descent will produce a good task-specific approximate posterior. Finally, we focus on applying meta-learning to choices that impact processing efficiency. When training a network on multiple tasks, we have a choice between interactive parallelism (training on different tasks one after another) and independent parallelism (using the network to process multiple tasks concurrently). For the simulation environment considered, we show that there is a trade-off between these two types of processing choices in deep neural networks. We then discuss a meta-learning algorithm for an agent to learn how to train itself with regard to this trade-off in an environment with unknown serialization cost.
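The learning-to-learn structure can be illustrated with a deliberately tiny MAML-style sketch (gradient-based, unlike the thesis's LSTM meta-learner; the task family and all numbers are illustrative): an initialization is optimized so that a single inner gradient step adapts well to each sampled task.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss_grad(theta, a):
    # toy task family: minimize (theta - a)^2; gradient is 2*(theta - a)
    return 2 * (theta - a)

theta0, inner_lr, outer_lr = 0.0, 0.1, 0.05
for _ in range(200):
    meta_grad = 0.0
    tasks = rng.uniform(2.0, 4.0, size=4)        # sample 4 task parameters a
    for a in tasks:
        theta_task = theta0 - inner_lr * loss_grad(theta0, a)  # inner step
        # outer gradient through the inner step
        # (chain rule: d theta_task / d theta0 = 1 - 2*inner_lr)
        meta_grad += (1 - 2 * inner_lr) * loss_grad(theta_task, a)
    theta0 -= outer_lr * meta_grad / len(tasks)  # outer (meta) update
print(round(theta0, 2))  # close to 3.0, the mean of the task distribution
```

The meta-learned initialization settles near the center of the task distribution, so one inner step lands close to any sampled task's optimum, which is the essence of learning an initialization that adapts quickly.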
Advisors/Committee Members: Li, Kai (advisor).
Subjects/Keywords: Deep Learning; Meta-Learning
APA (6th Edition):
Ravi, S. (2019). Meta-Learning for Data and Processing Efficiency. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp013j333513x
Chicago Manual of Style (16th Edition):
Ravi, Sachin. “Meta-Learning for Data and Processing Efficiency.” 2019. Doctoral Dissertation, Princeton University. Accessed February 27, 2021.
http://arks.princeton.edu/ark:/88435/dsp013j333513x.
MLA Handbook (7th Edition):
Ravi, Sachin. “Meta-Learning for Data and Processing Efficiency.” 2019. Web. 27 Feb 2021.
Vancouver:
Ravi S. Meta-Learning for Data and Processing Efficiency. [Internet] [Doctoral dissertation]. Princeton University; 2019. [cited 2021 Feb 27].
Available from: http://arks.princeton.edu/ark:/88435/dsp013j333513x.
Council of Science Editors:
Ravi S. Meta-Learning for Data and Processing Efficiency. [Doctoral Dissertation]. Princeton University; 2019. Available from: http://arks.princeton.edu/ark:/88435/dsp013j333513x

Princeton University
13.
Ravi, Sachin.
Meta-Learning for Data and Processing Efficiency.
Degree: PhD, 2019, Princeton University
URL: http://arks.princeton.edu/ark:/88435/dsp01ns064891r
▼ Deep learning models have shown great success in a variety of machine learning benchmarks; however, these models still lack the efficiency and flexibility of humans. Current deep learning methods involve training on a large amount of data to produce a model that can then specialize to the specific task encoded by the training data. Humans, on the other hand, are able to learn new concepts throughout our lives with comparatively little feedback. In order to bridge this gap, previous work has suggested the use of meta-learning. Rather than learning how to do a specific task, meta-learning involves learning how to learn, and utilizing this knowledge to learn new tasks more effectively. This thesis focuses on using meta-learning to improve the data and processing efficiency of deep learning models when learning new tasks.
First, we discuss a meta-learning model for the few-shot learning problem, where the aim is to learn a new classification task with unseen classes from few labeled examples. We use an LSTM-based meta-learner model to learn both the initialization and the optimization algorithm used to train another neural network, and show that our method compares favorably to nearest-neighbor approaches. The second part of the thesis deals with improving the predictive uncertainty of models in the few-shot learning setting. Using a Bayesian perspective, we propose a meta-learning method which efficiently amortizes hierarchical variational inference across tasks, learning a prior distribution over neural network weights so that a few steps of gradient descent will produce a good task-specific approximate posterior. Finally, we focus on applying meta-learning to choices that impact processing efficiency. When training a network on multiple tasks, we have a choice between interactive parallelism (training on different tasks one after another) and independent parallelism (using the network to process multiple tasks concurrently). For the simulation environment considered, we show that there is a trade-off between these two types of processing choices in deep neural networks. We then discuss a meta-learning algorithm for an agent to learn how to train itself with regard to this trade-off in an environment with unknown serialization cost.
Advisors/Committee Members: Li, Kai (advisor).
Subjects/Keywords: Deep Learning; Meta-Learning
APA (6th Edition):
Ravi, S. (2019). Meta-Learning for Data and Processing Efficiency. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp01ns064891r
Chicago Manual of Style (16th Edition):
Ravi, Sachin. “Meta-Learning for Data and Processing Efficiency.” 2019. Doctoral Dissertation, Princeton University. Accessed February 27, 2021.
http://arks.princeton.edu/ark:/88435/dsp01ns064891r.
MLA Handbook (7th Edition):
Ravi, Sachin. “Meta-Learning for Data and Processing Efficiency.” 2019. Web. 27 Feb 2021.
Vancouver:
Ravi S. Meta-Learning for Data and Processing Efficiency. [Internet] [Doctoral dissertation]. Princeton University; 2019. [cited 2021 Feb 27].
Available from: http://arks.princeton.edu/ark:/88435/dsp01ns064891r.
Council of Science Editors:
Ravi S. Meta-Learning for Data and Processing Efficiency. [Doctoral Dissertation]. Princeton University; 2019. Available from: http://arks.princeton.edu/ark:/88435/dsp01ns064891r

University of Illinois – Urbana-Champaign
14.
Deshpande, Ishan.
Generative modeling using the sliced Wasserstein distance.
Degree: MS, Electrical & Computer Engr, 2018, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/100951
▼ Generative adversarial nets (GANs) are very successful at modeling distributions from given samples, even in the high-dimensional case. However, their formulation is also known to be hard to optimize and often unstable. While these problems are particularly pronounced for early GAN formulations, there has been significant empirically motivated and theoretically founded progress toward improving stability, for instance by using the Wasserstein distance rather than the Jensen-Shannon divergence. Here, we consider an alternative formulation for generative modeling based on random projections which, in its simplest form, results in a single objective rather than a saddle-point formulation. By augmenting this approach with a discriminator we improve its accuracy. We found our approach to be significantly more stable than even the improved Wasserstein GAN. Further, unlike the traditional GAN loss, the loss formulated in our method is a good measure of the actual distance between the distributions and, for the first time for GAN training, we are able to report estimates of this distance.
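The random-projection idea in its simplest form: project both sample sets onto random unit directions, then use the closed-form 1-D Wasserstein distance on each projection (in one dimension the optimal coupling is just sorting). A minimal Monte-Carlo sketch for equal-size samples, not the thesis's implementation:

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=200, p=2, seed=0):
    """Monte-Carlo sliced Wasserstein-p distance between two equal-size
    point clouds x, y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    px = x @ theta.T            # (n, n_proj) 1-D projections
    py = y @ theta.T
    px.sort(axis=0)             # sorting = 1-D optimal transport coupling
    py.sort(axis=0)
    return (np.abs(px - py) ** p).mean() ** (1 / p)

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, size=(500, 2))
b = rng.normal(0.0, 1.0, size=(500, 2))   # same distribution as a
c = rng.normal(5.0, 1.0, size=(500, 2))   # shifted distribution
print(sliced_wasserstein(a, b) < sliced_wasserstein(a, c))  # True
```

Because each slice is a single sort, the objective is cheap and differentiable almost everywhere, which is what makes it usable as a single training objective rather than a saddle-point game.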
Advisors/Committee Members: Schwing, Alexander G (advisor).
Subjects/Keywords: Machine Learning; Deep Learning
APA (6th Edition):
Deshpande, I. (2018). Generative modeling using the sliced Wasserstein distance. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/100951
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Deshpande, Ishan. “Generative modeling using the sliced Wasserstein distance.” 2018. Thesis, University of Illinois – Urbana-Champaign. Accessed February 27, 2021.
http://hdl.handle.net/2142/100951.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Deshpande, Ishan. “Generative modeling using the sliced Wasserstein distance.” 2018. Web. 27 Feb 2021.
Vancouver:
Deshpande I. Generative modeling using the sliced Wasserstein distance. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2018. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/2142/100951.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Deshpande I. Generative modeling using the sliced Wasserstein distance. [Thesis]. University of Illinois – Urbana-Champaign; 2018. Available from: http://hdl.handle.net/2142/100951
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Illinois – Urbana-Champaign
15.
Liu, Jialin.
Machine learning workflow optimization via automatic discovery of resource reuse opportunities.
Degree: MS, Computer Science, 2019, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/104894
▼ Many state-of-the-art deep learning models rely on dynamic computation logic, making them difficult to optimize. In this thesis, we present a hashing-based algorithm that detects and optimizes computation logic common to different computation graphs. We show that our algorithm can be integrated seamlessly into popular deep learning frameworks such as TensorFlow, with nearly zero code changes required on the part of users to apply our optimizations to their programs. Experiments show that our algorithm achieves a 1.35× speedup on a sentiment classification task trained with the popular Tree-LSTM model.
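The core idea, hashing each subgraph by its operator and the hashes of its children so that identical sub-computations across graphs map to one cache entry, can be sketched on toy expression trees (the real system operates on framework computation graphs; everything here is illustrative):

```python
import math

_cache = {}  # structural hash -> computed result, shared across graphs

def node_hash(node):
    """Hash a node by its op and the hashes of its children, so
    structurally identical subgraphs collide on purpose."""
    op, *children = node
    return hash((op,) + tuple(node_hash(c) if isinstance(c, tuple) else c
                              for c in children))

def evaluate(node):
    if not isinstance(node, tuple):      # leaf: a plain number
        return node
    key = node_hash(node)
    if key in _cache:                    # common subgraph: reuse the result
        return _cache[key]
    op, *children = node
    vals = [evaluate(c) for c in children]
    result = {"add": sum, "mul": math.prod}[op](vals)
    _cache[key] = result
    return result

# The subexpression (2 + 3) appears in both graphs;
# it is evaluated once and then reused from the cache.
g1 = ("mul", ("add", 2, 3), 4)
g2 = ("add", ("add", 2, 3), 10)
print(evaluate(g1), evaluate(g2))  # 20 15
```

Tree-structured models like Tree-LSTM build a different graph per input sentence, which is exactly the setting where this kind of cross-graph structural caching pays off.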
Advisors/Committee Members: Parameswaran, Aditya (advisor).
Subjects/Keywords: Machine Learning; Deep Learning; System
APA (6th Edition):
Liu, J. (2019). Machine learning workflow optimization via automatic discovery of resource reuse opportunities. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/104894
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Liu, Jialin. “Machine learning workflow optimization via automatic discovery of resource reuse opportunities.” 2019. Thesis, University of Illinois – Urbana-Champaign. Accessed February 27, 2021.
http://hdl.handle.net/2142/104894.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Liu, Jialin. “Machine learning workflow optimization via automatic discovery of resource reuse opportunities.” 2019. Web. 27 Feb 2021.
Vancouver:
Liu J. Machine learning workflow optimization via automatic discovery of resource reuse opportunities. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2019. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/2142/104894.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Liu J. Machine learning workflow optimization via automatic discovery of resource reuse opportunities. [Thesis]. University of Illinois – Urbana-Champaign; 2019. Available from: http://hdl.handle.net/2142/104894
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

California State University – Sacramento
16.
Poosarla, Akshay.
Bone age prediction with convolutional neural networks.
Degree: MS, Computer Science, 2019, California State University – Sacramento
URL: http://hdl.handle.net/10211.3/207660
▼ Skeletal bone age assessment is a common clinical practice used to analyze and assess the biological maturity of pediatric patients. The process generally involves taking an X-ray of the left hand, including the fingers and wrist, followed by image analysis. The current process involves manually comparing this radiological scan with standard reference images and estimating the skeletal age. The analysis is crucial in determining whether a child is prone to certain diseases.
This manual process is very time-consuming and carries a high probability of misjudgment in predicting the skeletal age. However, recent developments in the field of neural networks provide an opportunity to automate it. In this project, we use convolutional neural network methods and image processing techniques to fully automate the process of predicting a patient's skeletal bone age from the given X-ray images.
The Radiological Society of North America has collected a dataset of 12,600 hand images of boys and girls from Colorado Children's Hospital and Stanford Children's Hospital and made it available for research purposes. This dataset is used to train and build a convolutional neural network model in this study. The dataset consists of complete images of the left hand and wrist, along with a CSV file containing each image's corresponding age in months and gender.
The purpose of this project is to automate the current manual process and develop a tool that helps doctors and acts as a decision-support system in predicting skeletal age. Along with this, we are developing a user-friendly and highly available online system that helps doctors predict bone age accurately.
Advisors/Committee Members: Cheng, Yuan.
Subjects/Keywords: Machine learning; Deep learning; Boneage
APA (6th Edition):
Poosarla, A. (2019). Bone age prediction with convolutional neural networks. (Masters Thesis). California State University – Sacramento. Retrieved from http://hdl.handle.net/10211.3/207660
Chicago Manual of Style (16th Edition):
Poosarla, Akshay. “Bone age prediction with convolutional neural networks.” 2019. Masters Thesis, California State University – Sacramento. Accessed February 27, 2021.
http://hdl.handle.net/10211.3/207660.
MLA Handbook (7th Edition):
Poosarla, Akshay. “Bone age prediction with convolutional neural networks.” 2019. Web. 27 Feb 2021.
Vancouver:
Poosarla A. Bone age prediction with convolutional neural networks. [Internet] [Masters thesis]. California State University – Sacramento; 2019. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/10211.3/207660.
Council of Science Editors:
Poosarla A. Bone age prediction with convolutional neural networks. [Masters Thesis]. California State University – Sacramento; 2019. Available from: http://hdl.handle.net/10211.3/207660

University of Oxford
17.
Berrada, Leonard.
Leveraging structure for optimization in deep learning.
Degree: PhD, 2019, University of Oxford
URL: http://ora.ox.ac.uk/objects/uuid:79360a95-a6e0-4acc-ba3a-07598f52ea39 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.820693
▼ In the past decade, neural networks have demonstrated impressive performance in supervised learning. They now power many applications, ranging from real-time medical diagnosis to human-sounding virtual assistants to wild-animal monitoring. Despite their increasing importance, however, they remain difficult to train due to a complex interplay between the learning objective, the optimization algorithm and generalization performance. Indeed, using different loss functions and optimization algorithms leads to trained models with significantly different performance on unseen data. In this thesis, we focus first on the loss function, for which a task-specific approach can improve generalization performance in the small-data or noisy-data setting. Specifically, we consider the top-k classification setting. We show that traditional piecewise-linear top-k loss functions require smoothing to work well with neural networks. However, it is computationally challenging to evaluate the resulting smoothed loss function and its gradient: a naive approach would result in a runtime proportional to the number of possible combinations of k predictions. Thanks to a connection to polynomial algebra, we develop computationally efficient algorithms to evaluate the smoothed loss function and its gradient. This allows us to train models with stochastic gradient descent (SGD) using the smooth top-k loss function, and we show that doing so is more robust to over-fitting than using the standard cross-entropy loss. Second, we turn our attention to optimization algorithms. While SGD empirically provides good generalization, it requires a manually tuned learning-rate schedule, and obtaining a suitable schedule for a given network and data set is a time-consuming and computationally expensive task. In this thesis, we propose novel optimization algorithms to alleviate this issue.
In particular, we exploit structure in three different ways, each leading to a new optimization algorithm. First, we exploit the piecewise linearity of the activation and loss functions, which results in a difference-of-convex programming approach. Second, we use the compositionality of the model and the loss function with the help of a proximal approach. Third, we exploit the property of interpolating models to derive an adaptive learning rate for SGD. Empirically, we compare the performance of the three algorithms on various deep learning tasks and demonstrate their advantages over state-of-the-art methods while avoiding the need for manual learning-rate schedules.
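The third idea, an adaptive learning rate for interpolating models, is in the spirit of a Polyak step size: when the attainable loss is (near) zero, the step γ = min(η_max, f(w)/‖∇f(w)‖²) needs no hand-tuned schedule. A toy full-gradient sketch on a problem with an exact solution, not the thesis's algorithm verbatim:

```python
import numpy as np

def polyak_descent(f, grad, w, eta_max=10.0, steps=100, eps=1e-12):
    """Gradient descent with a clipped Polyak step size, assuming the
    minimal loss is ~0 (the interpolation setting)."""
    for _ in range(steps):
        g = grad(w)
        gamma = min(eta_max, f(w) / (np.dot(g, g) + eps))
        w = w - gamma * g
    return w

# Toy "interpolating" problem: least squares with an exact solution w* = [1, 3].
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
f = lambda w: 0.5 * np.sum((A @ w - b) ** 2)
grad = lambda w: A.T @ (A @ w - b)
w_star = polyak_descent(f, grad, np.zeros(2))
print(np.round(w_star, 3))  # converges to [1. 3.]
```

The step size shrinks automatically as the loss approaches its (zero) minimum, which is the property that removes the need for a manual learning-rate schedule.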
Subjects/Keywords: optimization; machine learning; deep learning
APA (6th Edition):
Berrada, L. (2019). Leveraging structure for optimization in deep learning. (Doctoral Dissertation). University of Oxford. Retrieved from http://ora.ox.ac.uk/objects/uuid:79360a95-a6e0-4acc-ba3a-07598f52ea39 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.820693
Chicago Manual of Style (16th Edition):
Berrada, Leonard. “Leveraging structure for optimization in deep learning.” 2019. Doctoral Dissertation, University of Oxford. Accessed February 27, 2021.
http://ora.ox.ac.uk/objects/uuid:79360a95-a6e0-4acc-ba3a-07598f52ea39 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.820693.
MLA Handbook (7th Edition):
Berrada, Leonard. “Leveraging structure for optimization in deep learning.” 2019. Web. 27 Feb 2021.
Vancouver:
Berrada L. Leveraging structure for optimization in deep learning. [Internet] [Doctoral dissertation]. University of Oxford; 2019. [cited 2021 Feb 27].
Available from: http://ora.ox.ac.uk/objects/uuid:79360a95-a6e0-4acc-ba3a-07598f52ea39 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.820693.
Council of Science Editors:
Berrada L. Leveraging structure for optimization in deep learning. [Doctoral Dissertation]. University of Oxford; 2019. Available from: http://ora.ox.ac.uk/objects/uuid:79360a95-a6e0-4acc-ba3a-07598f52ea39 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.820693

Australian National University
18.
Dong, Cong.
Spatial Deep Networks for Outdoor Scene Classification.
Degree: 2015, Australian National University
URL: http://hdl.handle.net/1885/101712
▼ Scene classification has become an increasingly popular topic in computer vision.
The techniques for scene classification can be widely used in many other aspects,
such as detection, action recognition, and content-based image retrieval. Recently,
the stationary property of images has been leveraged in conjunction with convolutional
networks to perform classification tasks. In the existing approach, one
random patch is extracted from each training image to learn filters for convolutional
processes. However, feature learning only from one random patch per image
is not robust because patches selected from di↵erent areas of an image may contain
distinct scene objects which make the features of these patches have di↵erent
descriptive power. In this dissertation, focusing on deep learning techniques, we
propose a multi-scale network that utilizes multiple random patches and di↵erent
patch dimensions to learn feature representations for images in order to improve
the existing approach.
Despite the much better performance the multi-scale network can achieve than
the existing approach, lacking of local features and the spatial layout is one of
the core limitations of both methods. Therefore, we propose a novel Spatial Deep
Network (SDN) to further enhance the existing approach by exploiting the spatial
layout of the image and constraining the random patch extraction to be performed
in di↵erent areas of the image so as to e↵ectively restrict the patches to hold the
necessary characteristics of di↵erent image areas. In this way, SDN yields compact
but discriminative features that incorporate both global descriptors and the local
spatial information for images. Experiment results show that SDN considerably
exceeds the existing approach and multi-scale networks and achieves competitive
performance with some widely used classification techniques on the OT dataset
(developed by Oliva and Torralba). In order to evaluate the robustness of the
proposed SDN, we also apply it to the content-based image retrieval on the Holidays
dataset, where our features attain much better retrieval performance but have much
lower feature dimensions compared to other state-of-the-art feature descriptors.
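A minimal sketch of the multi-patch, multi-scale extraction idea described above (illustrative only, not the author's code; the image size, patch sizes, and patch counts are assumed):

```python
import numpy as np

def extract_random_patches(image, patch_sizes, patches_per_size, rng):
    """Sample square patches of several sizes from one image of shape (H, W, C)."""
    h, w, _ = image.shape
    patches = []
    for size in patch_sizes:
        for _ in range(patches_per_size):
            top = rng.integers(0, h - size + 1)   # high is exclusive
            left = rng.integers(0, w - size + 1)
            patches.append(image[top:top + size, left:left + size])
    return patches

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
patches = extract_random_patches(img, patch_sizes=(8, 16), patches_per_size=4, rng=rng)
print(len(patches))                        # 8
print(patches[0].shape, patches[-1].shape) # (8, 8, 3) (16, 16, 3)
```

Each patch would then be fed to the filter-learning stage; using several patches and scales per image is what distinguishes the multi-scale network from the single-random-patch baseline.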
Subjects/Keywords: Deep Learning;
Scene Classification;
Spatial Deep Networks
APA (6th Edition):
Dong, C. (2015). Spatial Deep Networks for Outdoor Scene Classification. (Thesis). Australian National University. Retrieved from http://hdl.handle.net/1885/101712
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Dong, Cong. “Spatial Deep Networks for Outdoor Scene Classification.” 2015. Thesis, Australian National University. Accessed February 27, 2021.
http://hdl.handle.net/1885/101712.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Dong, Cong. “Spatial Deep Networks for Outdoor Scene Classification.” 2015. Web. 27 Feb 2021.
Vancouver:
Dong C. Spatial Deep Networks for Outdoor Scene Classification. [Internet] [Thesis]. Australian National University; 2015. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1885/101712.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Dong C. Spatial Deep Networks for Outdoor Scene Classification. [Thesis]. Australian National University; 2015. Available from: http://hdl.handle.net/1885/101712
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Texas – Austin
19.
Hausknecht, Matthew John.
Cooperation and communication in multiagent deep reinforcement learning.
Degree: PhD, Computer science, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/45681
► Reinforcement learning is the area of machine learning concerned with learning which actions to execute in an unknown environment in order to maximize cumulative reward.…
(more)
▼ Reinforcement learning is the area of machine learning concerned with learning which actions to execute in an unknown environment in order to maximize cumulative reward. As agents begin to perform tasks of genuine interest to humans, they will be faced with environments too complex for humans to predetermine the correct actions using hand-designed solutions. Instead, capable learning agents will be necessary to tackle complex real-world domains. However, traditional reinforcement learning algorithms have difficulty with domains featuring 1) high-dimensional continuous state spaces, for example pixels from a camera image, 2) high-dimensional parameterized-continuous action spaces, 3) partial observability, and 4) multiple independent learning agents. We hypothesize that deep neural networks hold the key to scaling reinforcement learning towards complex tasks. This thesis seeks to answer the following two-part question: 1) How can the power of deep neural networks be leveraged to extend reinforcement learning to complex environments featuring partial observability, high-dimensional parameterized-continuous state and action spaces, and sparse rewards? 2) How can multiple deep reinforcement learning agents learn to cooperate in a multiagent setting? To address the first part of this question, this thesis explores the idea of using recurrent neural networks to combat partial observability experienced by agents in the domain of Atari 2600 video games. Next, we design a deep reinforcement learning agent capable of discovering effective policies for the parameterized-continuous action space found in the Half Field Offense simulated soccer domain. To address the second part of this question, this thesis investigates architectures and algorithms suited for cooperative multiagent learning. We demonstrate that sharing parameters and memories between deep reinforcement learning agents fosters policy similarity, which can result in cooperative behavior. Additionally, we hypothesize that communication can further aid cooperation, and we present the Grounded Semantic Network (GSN), which learns a communication protocol grounded in the observation space and reward function of the task. In general, we find that the GSN is effective on domains featuring partial observability and asymmetric information. All in all, this thesis demonstrates that reinforcement learning combined with deep neural network function approximation can produce algorithms capable of discovering effective policies for domains with partial observability, parameterized-continuous action spaces, and sparse rewards. Additionally, we demonstrate that single-agent deep reinforcement learning algorithms can be naturally extended towards cooperative multiagent tasks featuring learned communication. These results represent a non-trivial step towards extending agent-based AI towards complex environments.
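The parameter-sharing idea can be illustrated with a toy sketch (not the thesis code; the policy here is a bare linear-softmax network, and all shapes are assumed): agents that query the same weights produce identical action distributions for identical observations, which is the policy similarity the abstract links to cooperative behavior.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class SharedPolicy:
    """One set of weights used by every agent (parameter sharing)."""
    def __init__(self, n_obs, n_actions, rng):
        self.W = rng.normal(size=(n_obs, n_actions))

    def act_probs(self, obs):
        return softmax(obs @ self.W)

rng = np.random.default_rng(1)
policy = SharedPolicy(n_obs=4, n_actions=3, rng=rng)

obs = rng.random(4)
# Both agents consult the same parameters, so the same observation
# yields the same action distribution for each of them.
agent1 = policy.act_probs(obs)
agent2 = policy.act_probs(obs)
print(np.allclose(agent1, agent2))  # True
```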
Advisors/Committee Members: Stone, Peter, 1971- (advisor), Ballard, Dana (committee member), Mooney, Ray (committee member), Miikkulainen, Risto (committee member), Singh, Satinder (committee member).
Subjects/Keywords: Reinforcement learning; Deep learning; Multiagent learning; Cooperation; Communication; RoboCup; POMDP; Deep reinforcement learning; Deep RL
APA (6th Edition):
Hausknecht, M. J. (2016). Cooperation and communication in multiagent deep reinforcement learning. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/45681
Chicago Manual of Style (16th Edition):
Hausknecht, Matthew John. “Cooperation and communication in multiagent deep reinforcement learning.” 2016. Doctoral Dissertation, University of Texas – Austin. Accessed February 27, 2021.
http://hdl.handle.net/2152/45681.
MLA Handbook (7th Edition):
Hausknecht, Matthew John. “Cooperation and communication in multiagent deep reinforcement learning.” 2016. Web. 27 Feb 2021.
Vancouver:
Hausknecht MJ. Cooperation and communication in multiagent deep reinforcement learning. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2016. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/2152/45681.
Council of Science Editors:
Hausknecht MJ. Cooperation and communication in multiagent deep reinforcement learning. [Doctoral Dissertation]. University of Texas – Austin; 2016. Available from: http://hdl.handle.net/2152/45681

University of Waterloo
20.
Gaurav, Ashish.
Safety-Oriented Stability Biases for Continual Learning.
Degree: 2020, University of Waterloo
URL: http://hdl.handle.net/10012/15579
► Continual learning is often confounded by “catastrophic forgetting” that prevents neural networks from learning tasks sequentially. In the case of real world classification systems that…
(more)
▼ Continual learning is often confounded by “catastrophic forgetting” that prevents neural networks from learning tasks sequentially. In the case of real-world classification systems that are safety-validated prior to deployment, it is essential to ensure that validated knowledge is retained. We propose methods that build on existing unconstrained continual learning solutions, increasing the model variance or weakening the model bias to better retain existing knowledge.
We investigate multiple such strategies, both for continual classification as well as continual reinforcement learning. Finally, we demonstrate the improved performance of our methods against popular continual learning approaches, using variants of standard image classification datasets, as well as assess the effect of weaker biases in continual reinforcement learning.
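One common way to implement a stability bias of this kind is a quadratic penalty anchoring the weights to their previously validated values. The sketch below is an illustrative stand-in (toy quadratic task losses and an invented penalty strength `lam`), not the thesis's actual method:

```python
import numpy as np

def train_with_anchor(grad_task, w, w_anchor, lam, lr=0.1, steps=200):
    """Gradient descent on task loss plus a stability bias
    lam * ||w - w_anchor||^2 pulling weights toward validated ones."""
    for _ in range(steps):
        w = w - lr * (grad_task(w) + 2 * lam * (w - w_anchor))
    return w

# Toy quadratic losses with different optima standing in for two tasks.
w_task1 = np.array([1.0, 1.0])   # optimum of the validated task
w_task2 = np.array([3.0, -1.0])  # optimum of the new task
grad2 = lambda w: 2 * (w - w_task2)

# With no bias the weights drift fully to task 2; with a strong bias
# they stay near the validated task-1 weights while still adapting.
w_free = train_with_anchor(grad2, w_task1.copy(), w_task1, lam=0.0)
w_biased = train_with_anchor(grad2, w_task1.copy(), w_task1, lam=4.0)
print(np.linalg.norm(w_free - w_task1) > np.linalg.norm(w_biased - w_task1))  # True
```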
Subjects/Keywords: deep learning; continual learning; classification; reinforcement learning
APA (6th Edition):
Gaurav, A. (2020). Safety-Oriented Stability Biases for Continual Learning. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/15579
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Gaurav, Ashish. “Safety-Oriented Stability Biases for Continual Learning.” 2020. Thesis, University of Waterloo. Accessed February 27, 2021.
http://hdl.handle.net/10012/15579.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Gaurav, Ashish. “Safety-Oriented Stability Biases for Continual Learning.” 2020. Web. 27 Feb 2021.
Vancouver:
Gaurav A. Safety-Oriented Stability Biases for Continual Learning. [Internet] [Thesis]. University of Waterloo; 2020. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/10012/15579.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Gaurav A. Safety-Oriented Stability Biases for Continual Learning. [Thesis]. University of Waterloo; 2020. Available from: http://hdl.handle.net/10012/15579
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Guelph
21.
Im, Jiwoong.
Analyzing Unsupervised Representation Learning Models Under the View of Dynamical Systems.
Degree: Master of Applied Science, School of Engineering, 2015, University of Guelph
URL: https://atrium.lib.uoguelph.ca/xmlui/handle/10214/8809
► The objective of this thesis is to take the dynamical systems approach to understand the unsupervised learning models and learning algorithms. Gated auto-encoders (GAEs) are…
(more)
▼ The objective of this thesis is to take the dynamical systems approach to understand the unsupervised learning models and learning algorithms. Gated auto-encoders (GAEs) are an interesting and flexible extension of auto-encoders which can learn transformations among different images or pixel covariances within images. We examine the GAEs' ability to represent different functions or data distributions. We apply a dynamical systems view to GAEs, deriving a scoring function, and drawing connections to RBMs. In the second part of our study, we investigate the performance of Minimum Probability Flow (MPF) learning for training restricted Boltzmann machines (RBMs). MPF proposes a tractable, consistent objective function defined in terms of a Taylor expansion of the KL divergence with respect to sampling dynamics. We propose a more general form for the sampling dynamics in MPF, and explore the consequences of different choices for these dynamics for training RBMs.
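For a binary-visible RBM with single-bit-flip dynamics, the MPF objective is a sum of exponentiated free-energy differences between data points and their one-bit neighbors. A rough NumPy sketch of that standard form (shapes and parameters are invented; this is not the generalized dynamics the thesis proposes):

```python
import numpy as np

def free_energy(v, W, b, c):
    """RBM free energy F(v) = -b.v - sum_j log(1 + exp(c_j + v.W_j))."""
    return -v @ b - np.sum(np.log1p(np.exp(c + v @ W)))

def mpf_objective(data, W, b, c):
    """MPF objective with single-bit-flip connectivity:
    K = (1/|D|) sum over data x and bit-flip neighbors x' of
        exp((F(x) - F(x')) / 2).
    Minimizing K pushes probability from non-data neighbors onto data."""
    total = 0.0
    for v in data:
        fv = free_energy(v, W, b, c)
        for i in range(len(v)):
            v_flip = v.copy()
            v_flip[i] = 1 - v_flip[i]          # one-bit neighbor
            total += np.exp((fv - free_energy(v_flip, W, b, c)) / 2)
    return total / len(data)

rng = np.random.default_rng(2)
data = rng.integers(0, 2, size=(5, 6)).astype(float)  # 5 binary visible vectors
W = rng.normal(scale=0.1, size=(6, 4))                # 6 visible, 4 hidden units
b, c = np.zeros(6), np.zeros(4)
print(mpf_objective(data, W, b, c) > 0)  # True: a sum of exponentials is positive
```

The objective is differentiable in `W`, `b`, `c`, so it can be minimized with any gradient-based optimizer, without the sampling required by contrastive divergence.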
Advisors/Committee Members: Taylor, Graham W (advisor).
Subjects/Keywords: Machine learning; Deep Learning; unsupervised learning
APA (6th Edition):
Im, J. (2015). Analyzing Unsupervised Representation Learning Models Under the View of Dynamical Systems. (Masters Thesis). University of Guelph. Retrieved from https://atrium.lib.uoguelph.ca/xmlui/handle/10214/8809
Chicago Manual of Style (16th Edition):
Im, Jiwoong. “Analyzing Unsupervised Representation Learning Models Under the View of Dynamical Systems.” 2015. Masters Thesis, University of Guelph. Accessed February 27, 2021.
https://atrium.lib.uoguelph.ca/xmlui/handle/10214/8809.
MLA Handbook (7th Edition):
Im, Jiwoong. “Analyzing Unsupervised Representation Learning Models Under the View of Dynamical Systems.” 2015. Web. 27 Feb 2021.
Vancouver:
Im J. Analyzing Unsupervised Representation Learning Models Under the View of Dynamical Systems. [Internet] [Masters thesis]. University of Guelph; 2015. [cited 2021 Feb 27].
Available from: https://atrium.lib.uoguelph.ca/xmlui/handle/10214/8809.
Council of Science Editors:
Im J. Analyzing Unsupervised Representation Learning Models Under the View of Dynamical Systems. [Masters Thesis]. University of Guelph; 2015. Available from: https://atrium.lib.uoguelph.ca/xmlui/handle/10214/8809

University of Toronto
22.
Makhzani, Alireza.
Unsupervised Representation Learning with Autoencoders.
Degree: PhD, 2018, University of Toronto
URL: http://hdl.handle.net/1807/89800
► Despite the recent progress in machine learning and deep learning, unsupervised learning still remains a largely unsolved problem. It is widely recognized that unsupervised learning…
(more)
▼ Despite the recent progress in machine learning and deep learning, unsupervised learning still remains a largely unsolved problem. It is widely recognized that unsupervised learning algorithms that can learn useful representations are needed for solving problems with limited label information. In this thesis, we study the problem of learning unsupervised representations using autoencoders, and propose regularization techniques that enable autoencoders to learn useful representations of data in unsupervised and semi-supervised settings. First, we exploit sparsity as a generic prior on the representations of autoencoders and propose sparse autoencoders that can learn sparse representations with very fast inference processes, making them well-suited to large problem sizes where conventional sparse coding algorithms cannot be applied. Next, we study autoencoders from a probabilistic perspective and propose generative autoencoders that use a generative adversarial network (GAN) to match the distribution of the latent code of the autoencoder with a pre-defined prior. We show that these generative autoencoders can learn posterior approximations that are more expressive than tractable densities often used in variational inference. We demonstrate the performance of the developed methods of this thesis on real-world image datasets and show their applications in generative modeling, clustering, semi-supervised classification and dimensionality reduction.
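The fast sparse inference mentioned above can be illustrated in the spirit of a k-sparse autoencoder: encoding is a single matrix product followed by keeping only the k largest hidden activations. This is a sketch under assumed shapes and k, not the thesis code:

```python
import numpy as np

def k_sparse_encode(x, W_enc, k):
    """Encode, then keep only the k largest-magnitude hidden activations
    (support-set selection); all other units are zeroed. Inference is one
    matrix product plus a top-k, hence much faster than iterative sparse
    coding."""
    h = x @ W_enc
    drop = np.argsort(np.abs(h))[:-k]   # indices of all but the k largest
    h[drop] = 0.0
    return h

rng = np.random.default_rng(5)
W_enc = rng.normal(size=(16, 32))       # 16-d input, 32 hidden units
x = rng.random(16)
h = k_sparse_encode(x, W_enc, k=4)
print(np.count_nonzero(h))  # 4
```

A decoder would reconstruct via another matrix product (e.g. `x_rec = h @ W_dec`), and training would minimize reconstruction error through the top-k selection.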
Advisors/Committee Members: Frey, Brendan, Electrical and Computer Engineering.
Subjects/Keywords: Deep Learning; Machine Learning; Unsupervised Learning; 0984
APA (6th Edition):
Makhzani, A. (2018). Unsupervised Representation Learning with Autoencoders. (Doctoral Dissertation). University of Toronto. Retrieved from http://hdl.handle.net/1807/89800
Chicago Manual of Style (16th Edition):
Makhzani, Alireza. “Unsupervised Representation Learning with Autoencoders.” 2018. Doctoral Dissertation, University of Toronto. Accessed February 27, 2021.
http://hdl.handle.net/1807/89800.
MLA Handbook (7th Edition):
Makhzani, Alireza. “Unsupervised Representation Learning with Autoencoders.” 2018. Web. 27 Feb 2021.
Vancouver:
Makhzani A. Unsupervised Representation Learning with Autoencoders. [Internet] [Doctoral dissertation]. University of Toronto; 2018. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1807/89800.
Council of Science Editors:
Makhzani A. Unsupervised Representation Learning with Autoencoders. [Doctoral Dissertation]. University of Toronto; 2018. Available from: http://hdl.handle.net/1807/89800

University of Illinois – Urbana-Champaign
23.
Benson, Christopher Edward.
Improving cache replacement policy using deep reinforcement learning.
Degree: MS, Computer Science, 2018, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/102858
► This thesis explores the use of reinforcement learning approaches to improve replacement policies of caches. In today's internet, caches play a vital role in improving…
(more)
▼ This thesis explores the use of reinforcement learning approaches to improve replacement policies of caches. In today's internet, caches play a vital role in improving performance of data transfers and load speeds. From video streaming to information retrieval from databases, caches allow applications to function more quickly and efficiently. A cache's replacement policy plays a major role in determining the cache's effectiveness and performance. The replacement policy is an algorithm that chooses which piece of data in the cache should be evicted when the cache becomes full and new elements are requested. In computer systems today, most caches use simple heuristic-based policies. Currently used policies are effective but are still far from optimal. Using more optimal cache replacement policies could dramatically improve internet performance and reduce database costs for many industry-based companies.
This research examines learning more optimal replacement policies using reinforcement learning. In reinforcement learning, an agent learns to take optimal actions given information about an environment and a reward signal. In this work, deep reinforcement learning algorithms are trained to learn optimal cache replacement policies using a simulated cache environment and database access traces. This research presents the idea of using index-based cache access histories as input data for the reinforcement learning algorithms instead of content-based input. Several approaches are explored, including value-based algorithms and policy gradient algorithms. The work presented here also explores the idea of using imitation learning algorithms to mimic optimal cache replacement policies. The algorithms are tested on several different cache sizes and data access patterns to show that these learned policies can outperform currently used replacement policies in a variety of settings.
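To make the "far from optimal" claim concrete, the sketch below compares LRU against Belady's clairvoyant policy (the kind of oracle that imitation-learning approaches typically mimic) on a toy looping trace; the trace and cache size are invented for illustration:

```python
def lru_misses(trace, capacity):
    """Count misses for a Least-Recently-Used cache on an access trace."""
    cache, misses = [], 0
    for key in trace:
        if key in cache:
            cache.remove(key)
        else:
            misses += 1
            if len(cache) == capacity:
                cache.pop(0)              # evict least recently used
        cache.append(key)                 # most recently used at the end
    return misses

def belady_misses(trace, capacity):
    """Count misses for Belady's optimal policy: evict the cached element
    whose next use lies furthest in the future."""
    cache, misses = set(), 0
    for i, key in enumerate(trace):
        if key in cache:
            continue
        misses += 1
        if len(cache) == capacity:
            def next_use(k):
                for j in range(i + 1, len(trace)):
                    if trace[j] == k:
                        return j
                return float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(key)
    return misses

trace = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # looping access pattern
print(lru_misses(trace, 3), belady_misses(trace, 3))  # 10 7
```

On this pattern LRU misses 10 of 12 accesses while the clairvoyant optimum misses only 7; a learned policy aims to close part of that gap without future knowledge.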
Advisors/Committee Members: Peng, Jian (advisor).
Subjects/Keywords: Reinforcement Learning; Machine Learning; Deep Learning
APA (6th Edition):
Benson, C. E. (2018). Improving cache replacement policy using deep reinforcement learning. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/102858
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Benson, Christopher Edward. “Improving cache replacement policy using deep reinforcement learning.” 2018. Thesis, University of Illinois – Urbana-Champaign. Accessed February 27, 2021.
http://hdl.handle.net/2142/102858.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Benson, Christopher Edward. “Improving cache replacement policy using deep reinforcement learning.” 2018. Web. 27 Feb 2021.
Vancouver:
Benson CE. Improving cache replacement policy using deep reinforcement learning. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2018. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/2142/102858.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Benson CE. Improving cache replacement policy using deep reinforcement learning. [Thesis]. University of Illinois – Urbana-Champaign; 2018. Available from: http://hdl.handle.net/2142/102858
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

NSYSU
24.
Lin, Kun-da.
Deep Reinforcement Learning with a Gating Network.
Degree: Master, Electrical Engineering, 2017, NSYSU
URL: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0223117-131536
► Reinforcement Learning (RL) is a good way to train the robot since it doesn't need an exact model of the environment. All need is to…
(more)
▼ Reinforcement Learning (RL) is a good way to train a robot, since it does not need an exact model of the environment. All that is needed is to let a learning agent interact with the environment through an appropriate reward function, which is associated with the goal of the task the agent is expected to accomplish. Unfortunately, it is hard to design a reward function for a complicated problem, such as a soccer player in a game where the goal of scoring is not directly related to the mission or role the player is asked to play by the coach. Moreover, the tabular method for approximating returns in RL is better suited to environments with few states; in a huge state space, RL methods face the curse of dimensionality. To alleviate these difficulties, this thesis proposes a deep reinforcement learning method regulated by gating networks. By the merit of deep neural networks, even when raw image pixels are taken as states, latent features can be trained and implicitly extracted layer by layer. In the proposed method, a composed policy is obtained by a gating network that regulates the outputs of several deep learning modules, each of which is trained for an individual policy. Two video games, Flappy Bird and Ping-Pong, are adopted as testbeds to examine the performance of the proposed method. In the proposed architecture, each deep learning policy module is first trained with a simple reward function. Through the gating network, these simple policies can be composed into a more sophisticated one, so as to accommodate more complicated tasks; this is akin to a divide-and-conquer strategy. The proposed architecture has two arrangements: one is called the in-parallel gating network, and the other the in-serial network. The outcomes show that both can efficiently shorten the training time.
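The in-parallel arrangement can be sketched as a gate that outputs a convex combination of the action distributions produced by independently trained policy modules (toy linear modules and assumed shapes; not the thesis implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_policy(state, policy_modules, gate_W):
    """In-parallel gating: the gate assigns a weight to each policy
    module and mixes their action distributions."""
    gate = softmax(state @ gate_W)                          # one weight per module
    action_dists = np.stack([softmax(state @ W) for W in policy_modules])
    return gate @ action_dists                              # convex combination

rng = np.random.default_rng(3)
state = rng.random(5)
modules = [rng.normal(size=(5, 4)) for _ in range(2)]  # two simple policies
gate_W = rng.normal(size=(5, 2))

combined = gated_policy(state, modules, gate_W)
print(combined.shape, round(combined.sum(), 6))  # (4,) 1.0
```

Because the mixture of distributions is itself a distribution, the composed policy can be sampled or argmaxed exactly like any single module's policy.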
Advisors/Committee Members: Ming-Yi Ju (chair), Yu-Jen Chen (chair), Kao-Shing Hwang (committee member), Jin-Ling Lin (chair).
Subjects/Keywords: Reinforcement Learning; Deep Reinforcement Learning; Deep Learning; Gating network; Neural network
APA (6th Edition):
Lin, K. (2017). Deep Reinforcement Learning with a Gating Network. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0223117-131536
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Lin, Kun-da. “Deep Reinforcement Learning with a Gating Network.” 2017. Thesis, NSYSU. Accessed February 27, 2021.
http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0223117-131536.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Lin, Kun-da. “Deep Reinforcement Learning with a Gating Network.” 2017. Web. 27 Feb 2021.
Vancouver:
Lin K. Deep Reinforcement Learning with a Gating Network. [Internet] [Thesis]. NSYSU; 2017. [cited 2021 Feb 27].
Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0223117-131536.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Lin K. Deep Reinforcement Learning with a Gating Network. [Thesis]. NSYSU; 2017. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0223117-131536
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Rochester Institute of Technology
25.
Lamos-Sweeney, Joshua.
Deep learning using genetic algorithms.
Degree: Computer Science (GCCIS), 2012, Rochester Institute of Technology
URL: https://scholarworks.rit.edu/theses/254
► Deep Learning networks are a new type of neural network that discovers important object features. These networks determine features without supervision, and are adept at…
(more)
▼ Deep Learning networks are a new type of neural network that discovers important object features. These networks determine features without supervision, and are adept at learning high-level abstractions about their data sets. These networks are useful for a variety of tasks, but are difficult to train. This difficulty is compounded when multiple networks are trained in a layered fashion, which results in increased solution complexity as well as increased training time.
This paper examines the use of genetic algorithms as a training mechanism for Deep Learning networks, with emphasis on training networks with a large number of layers, each of which is trained independently to reduce the computational burden and increase the overall flexibility of the algorithm.
This paper covers the implementation of a multilayer deep learning network using a genetic algorithm, including tuning the genetic algorithm, as well as results of experiments involving data compression and object classification. This paper aims to show that a genetic algorithm can be used to train a non-trivial deep learning network in place of existing methodologies for network training, and that the features extracted can be used for a variety of real-world computational problems.
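As a rough illustration of training network weights with a genetic algorithm: the toy below evolves a 2-2-1 tanh network on XOR using truncation selection, elitism, and Gaussian mutation. The network size and hyperparameters are invented for this sketch; it is not the paper's implementation.

```python
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w, data=XOR):
    """Negative squared error of a tiny 2-2-1 tanh network (9 weights)."""
    err = 0.0
    for (x1, x2), y in data:
        h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
        h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
        out = math.tanh(w[6] * h1 + w[7] * h2 + w[8])
        err += (out - y) ** 2
    return -err

def evolve(pop_size=60, generations=150, sigma=0.4, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]            # keep the best fifth (elitism)
        pop = elite + [
            [g + rng.gauss(0, sigma) for g in rng.choice(elite)]
            for _ in range(pop_size - len(elite))   # mutated offspring
        ]
    return max(pop, key=fitness)

best = evolve()
# Elitism makes the best fitness non-decreasing across generations.
print(fitness(best) > -1.5)  # True
```

No gradients are used anywhere, which is the appeal when layers are trained independently: each layer's weights can be evolved against its own objective.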
Advisors/Committee Members: Gaborski, Roger.
Subjects/Keywords: Deep learning; Genetic algorithms
APA (6th Edition):
Lamos-Sweeney, J. (2012). Deep learning using genetic algorithms. (Thesis). Rochester Institute of Technology. Retrieved from https://scholarworks.rit.edu/theses/254
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Lamos-Sweeney, Joshua. “Deep learning using genetic algorithms.” 2012. Thesis, Rochester Institute of Technology. Accessed February 27, 2021.
https://scholarworks.rit.edu/theses/254.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Lamos-Sweeney, Joshua. “Deep learning using genetic algorithms.” 2012. Web. 27 Feb 2021.
Vancouver:
Lamos-Sweeney J. Deep learning using genetic algorithms. [Internet] [Thesis]. Rochester Institute of Technology; 2012. [cited 2021 Feb 27].
Available from: https://scholarworks.rit.edu/theses/254.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Lamos-Sweeney J. Deep learning using genetic algorithms. [Thesis]. Rochester Institute of Technology; 2012. Available from: https://scholarworks.rit.edu/theses/254
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Rochester Institute of Technology
26.
Makhija, Sidharth.
Graph Networks for Multi-Label Image Recognition.
Degree: MS, Computer Engineering, 2020, Rochester Institute of Technology
URL: https://scholarworks.rit.edu/theses/10541
► Providing machines with a robust visualization of multiple objects in a scene has a myriad of applications in the physical world. This research solves…
(more)
▼ Providing machines with a robust visualization of multiple objects in a scene has a myriad of applications in the physical world. This research solves the task of multi-label image recognition using a deep learning approach. For most multi-label image recognition datasets, there are multiple objects within a single image and a single label can be seen many times throughout the dataset. Therefore, it is not efficient to classify each object in isolation; rather, it is important to infer the inter-dependencies between the labels. To extract a latent representation of the pixels from an image, this work uses a convolutional network approach evaluating three different image feature extraction networks. In order to learn the label inter-dependencies, this work proposes a graph convolution network approach as compared to previous approaches such as probabilistic graph or recurrent neural networks. In the graph neural network approach, the image labels are first encoded into word embeddings. These serve as nodes on a graph. The correlations between these nodes are learned using graph neural networks. We investigate how to create the adjacency matrix without manual calculation of the label correlations in the respective datasets. This proposed approach is evaluated on the widely-used PASCAL VOC, MSCOCO, and NUS-WIDE multi-label image recognition datasets. The main evaluation metrics used will be mean average precision and overall F1 score, to show that the learned adjacency matrix method for labels along with the addition of visual attention for image features is able to achieve similar performance to manually calculating the label adjacency matrix.
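A single graph-convolution layer of the kind used to propagate information between label embeddings can be sketched as follows (the 4-label adjacency, embedding size, and weights are invented for illustration; this is the standard normalized propagation rule, not necessarily the exact variant used in the thesis):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric degree normalization
    return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

rng = np.random.default_rng(4)
# Hypothetical 4-label co-occurrence graph (edges = labels seen together).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.random((4, 8))      # stand-in for 8-d label word embeddings
W = rng.normal(size=(8, 6)) # learnable layer weights
out = gcn_layer(A, H, W)
print(out.shape)  # (4, 6): one updated embedding per label
```

Stacking such layers lets each label's representation absorb information from correlated labels before being matched against the image features.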
Advisors/Committee Members: Raymond Ptucha.
Subjects/Keywords: Convolution; Deep learning; Graph network
APA (6th Edition):
Makhija, S. (2020). Graph Networks for Multi-Label Image Recognition. (Masters Thesis). Rochester Institute of Technology. Retrieved from https://scholarworks.rit.edu/theses/10541
Chicago Manual of Style (16th Edition):
Makhija, Sidharth. “Graph Networks for Multi-Label Image Recognition.” 2020. Masters Thesis, Rochester Institute of Technology. Accessed February 27, 2021.
https://scholarworks.rit.edu/theses/10541.
MLA Handbook (7th Edition):
Makhija, Sidharth. “Graph Networks for Multi-Label Image Recognition.” 2020. Web. 27 Feb 2021.
Vancouver:
Makhija S. Graph Networks for Multi-Label Image Recognition. [Internet] [Masters thesis]. Rochester Institute of Technology; 2020. [cited 2021 Feb 27].
Available from: https://scholarworks.rit.edu/theses/10541.
Council of Science Editors:
Makhija S. Graph Networks for Multi-Label Image Recognition. [Masters Thesis]. Rochester Institute of Technology; 2020. Available from: https://scholarworks.rit.edu/theses/10541

San Jose State University
27.
Gopalakrishnan Elango, Sindhuja.
Convolutional Neural Network Acceleration on GPU by Exploiting Data Reuse.
Degree: MS, Computer Engineering, 2017, San Jose State University
URL: https://doi.org/10.31979/etd.9b4r-na7x
;
https://scholarworks.sjsu.edu/etd_theses/4800
► Graphical processing units (GPUs) achieve high throughput with hundreds of cores for concurrent execution and a large register file for storing the context of…
(more)
▼ Graphical processing units (GPUs) achieve high throughput with hundreds of cores for concurrent execution and a large register file for storing the context of thousands of threads. Deep learning algorithms have recently gained popularity for their capability for solving complex problems without programmer intervention. Deep learning algorithms operate with a massive amount of input data that causes high memory access overhead. In the convolutional layer of the deep learning network, there exists a unique pattern of data access and reuse, which is not effectively utilized by the GPU architecture. These abundant redundant memory accesses lead to a significant power and performance overhead. In this thesis, I maintained redundant data in a faster on-chip memory, register file, so that the data that are used by multiple neurons can be directly fetched from the register file without cumbersome system memory accesses. In this method, a neuron’s load instruction is replaced by a shuffle instruction if the data are found from the register file. To enable data sharing in the register file, a new register type was used as a destination register of load instructions. By using the unique ID of the new load destination registers, neurons can easily find their data in the register file. By exploiting the underutilized register file space, this method does not impose any area or power overhead on the register file design. The effectiveness of the new idea was evaluated through exhaustive experiments. According to the results, the new idea significantly improved performance and energy efficiency compared to baseline architecture and shared memory version solution.
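The data-reuse opportunity described above can be quantified with a toy count of global-memory loads for a 1-D convolution, comparing per-thread loading against perfect sharing of overlapping window elements. Sizes are invented, and real GPU behavior (warps, shuffle instructions, register banks) is not modeled here:

```python
def conv_load_counts(input_len, kernel_len):
    """Compare memory loads for a 1-D convolution when each output
    thread loads its own window vs. when overlapping elements are
    shared (e.g. kept in the register file and fetched via shuffles)."""
    naive = 0
    shared = set()
    for out_idx in range(input_len - kernel_len + 1):
        for k in range(kernel_len):
            naive += 1                 # every thread re-loads its window
            shared.add(out_idx + k)    # with sharing, each element loads once
    return naive, len(shared)

naive, shared = conv_load_counts(input_len=32, kernel_len=5)
print(naive, shared)  # 140 32
```

Even in this tiny case, sharing cuts loads by more than 4x; the thesis's register-file scheme targets exactly this redundancy without adding on-chip storage.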
Subjects/Keywords: Deep Learning; Energy-efficiency; GPU
APA (6th Edition):
Gopalakrishnan Elango, S. (2017). Convolutional Neural Network Acceleration on GPU by Exploiting Data Reuse. (Masters Thesis). San Jose State University. Retrieved from https://doi.org/10.31979/etd.9b4r-na7x ; https://scholarworks.sjsu.edu/etd_theses/4800
Chicago Manual of Style (16th Edition):
Gopalakrishnan Elango, Sindhuja. “Convolutional Neural Network Acceleration on GPU by Exploiting Data Reuse.” 2017. Masters Thesis, San Jose State University. Accessed February 27, 2021.
https://doi.org/10.31979/etd.9b4r-na7x ; https://scholarworks.sjsu.edu/etd_theses/4800.
MLA Handbook (7th Edition):
Gopalakrishnan Elango, Sindhuja. “Convolutional Neural Network Acceleration on GPU by Exploiting Data Reuse.” 2017. Web. 27 Feb 2021.
Vancouver:
Gopalakrishnan Elango S. Convolutional Neural Network Acceleration on GPU by Exploiting Data Reuse. [Internet] [Masters thesis]. San Jose State University; 2017. [cited 2021 Feb 27].
Available from: https://doi.org/10.31979/etd.9b4r-na7x ; https://scholarworks.sjsu.edu/etd_theses/4800.
Council of Science Editors:
Gopalakrishnan Elango S. Convolutional Neural Network Acceleration on GPU by Exploiting Data Reuse. [Masters Thesis]. San Jose State University; 2017. Available from: https://doi.org/10.31979/etd.9b4r-na7x ; https://scholarworks.sjsu.edu/etd_theses/4800

Rochester Institute of Technology
28.
Petroski Such, Felipe.
Deep Learning Architectures for Novel Problems.
Degree: MS, Computer Engineering, 2017, Rochester Institute of Technology
URL: https://scholarworks.rit.edu/theses/9611
▼ With convolutional neural networks revolutionizing the computer vision field, it is important to extend the capabilities of neural-based systems to dynamic and unrestricted data such as graphs. Doing so not only expands the applications of such systems but also provides more insight into how neural-based systems can be improved.
Currently, most implementations of graph neural networks are based on vertex filtering over fixed adjacency matrices. Although important for many applications, vertex filtering restricts such systems to vertex-focused graphs and cannot be efficiently extended to edge-focused graphs such as social networks. Applications of current systems are mostly limited to images and document references.
Beyond the graph applications, this work also explores the use of convolutional neural networks for intelligent character recognition (ICR) in a novel way. Most systems define ICR as either a recurrent classification problem or an image classification problem. This achieves great performance in a limited environment but does not generalize well to real-world applications. This work instead defines ICR as a segmentation problem, which we show provides many benefits.
The goal of this work was to explore alternatives to current graph neural network implementations as well as new applications of such systems. This work also focused on improving ICR techniques on isolated words using deep learning techniques. Due to the contrast between these two contributions, this document is divided into Part I, focusing on the graph work, and Part II, focusing on the intelligent character recognition work.
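The vertex filtering the abstract describes can be sketched minimally (a generic formulation of graph convolution over a fixed adjacency matrix, not the thesis's own system): each vertex's new feature is an average over its own and its neighbours' features.

```python
# Minimal vertex-filtering sketch: H' = D^-1 (A + I) H, i.e. average each
# vertex's feature with its neighbours' after adding self-loops.

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def vertex_filter(adj, feats):
    n = len(adj)
    # Add self-loops so each vertex keeps its own signal.
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    # Row-normalize by degree (including the self-loop).
    deg = [sum(row) for row in a_hat]
    a_norm = [[a_hat[i][j] / deg[i] for j in range(n)] for i in range(n)]
    return matmul(a_norm, feats)

# Path graph 0-1-2 with one scalar feature per vertex.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[0.0], [3.0], [6.0]]
print(vertex_filter(adj, feats))  # [[1.5], [3.0], [4.5]]
```

Note that the filter is defined entirely in terms of vertices and a fixed adjacency matrix, which is exactly why, as the abstract argues, it does not extend naturally to edge-focused graphs.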
Advisors/Committee Members: Raymond Ptucha.
Subjects/Keywords: Deep learning; ICR; Machine intelligence
APA (6th Edition):
Petroski Such, F. (2017). Deep Learning Architectures for Novel Problems. (Masters Thesis). Rochester Institute of Technology. Retrieved from https://scholarworks.rit.edu/theses/9611
Chicago Manual of Style (16th Edition):
Petroski Such, Felipe. “Deep Learning Architectures for Novel Problems.” 2017. Masters Thesis, Rochester Institute of Technology. Accessed February 27, 2021.
https://scholarworks.rit.edu/theses/9611.
MLA Handbook (7th Edition):
Petroski Such, Felipe. “Deep Learning Architectures for Novel Problems.” 2017. Web. 27 Feb 2021.
Vancouver:
Petroski Such F. Deep Learning Architectures for Novel Problems. [Internet] [Masters thesis]. Rochester Institute of Technology; 2017. [cited 2021 Feb 27].
Available from: https://scholarworks.rit.edu/theses/9611.
Council of Science Editors:
Petroski Such F. Deep Learning Architectures for Novel Problems. [Masters Thesis]. Rochester Institute of Technology; 2017. Available from: https://scholarworks.rit.edu/theses/9611

McMaster University
29.
Chi, Zhixiang.
IMAGE RESTORATIONS USING DEEP LEARNING TECHNIQUES.
Degree: MASc, 2018, McMaster University
URL: http://hdl.handle.net/11375/24290
▼ Conventional methods for solving image restoration problems are typically built on an image degradation model and on some priors of the latent image. The model of the degraded image and the prior knowledge of the latent image are necessary because restoration is an ill-posed inverse problem. However, for some applications, such as those addressed in this thesis, the image degradation process is too complex to model precisely; in addition, mathematical priors, such as low rank and sparsity of the image signal, are often too idealistic for real-world images. These difficulties limit the performance of existing image restoration algorithms, but they can be, to a certain extent, overcome by the techniques of machine learning, particularly deep convolutional neural networks. Machine learning allows sample statistics far beyond what is available in a single input image to be exploited. More importantly, big data can be used to train deep neural networks to learn the complex non-linear mapping between the degraded and original images. This circumvents the difficulty of building an explicit, realistic mathematical model when the degradation causes are complex and compounded.
In this thesis, we design and implement deep convolutional neural networks (DCNN) for two challenging image restoration problems: reflection removal and joint demosaicking-deblurring. The first problem is one of blind source separation; its DCNN solution requires a large set of paired clean and mixed images for training. As these paired training images are very difficult, if not impossible, to acquire in the real world, we develop a novel technique to synthesize the required training images that satisfactorily approximate the real ones. For the joint demosaicking-deblurring problem, we propose a new multiscale DCNN architecture consisting of a cascade of subnetworks so that the underlying blind deconvolution task can be broken into smaller subproblems and solved more effectively and robustly. In both cases extensive experiments are carried out. Experimental results demonstrate clear advantages of the proposed DCNN methods over existing ones.
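The paired-data synthesis idea can be illustrated with a toy sketch. This is a generic blending model commonly used for reflection synthesis, not the thesis's specific technique: a background image is combined with an attenuated, blurred reflection layer, yielding a (mixed, clean) training pair.

```python
# Toy 1-D illustration of synthesizing training pairs for reflection
# removal: mixed = alpha * background + (1 - alpha) * blur(reflection).
# The blending weight and the 3-tap blur are illustrative assumptions.

def box_blur_1d(xs):
    """Tiny 3-tap box blur, standing in for the optical blur of reflections."""
    n = len(xs)
    return [(xs[max(i - 1, 0)] + xs[i] + xs[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def synthesize_pair(background, reflection, alpha=0.8):
    """Return (mixed, background): one training pair for a separation network."""
    blurred = box_blur_1d(reflection)
    mixed = [alpha * b + (1 - alpha) * r for b, r in zip(background, blurred)]
    return mixed, background

mixed, target = synthesize_pair([10.0, 20.0, 30.0], [0.0, 30.0, 0.0])
print([round(m, 1) for m in mixed])  # [10.0, 18.0, 26.0]
```

A network trained on many such pairs learns to map `mixed` back to `target`, which is the supervised setup the abstract says is hard to obtain from real captures.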
Thesis
Master of Applied Science (MASc)
Advisors/Committee Members: Wu, Xiaolin, Electrical and Computer Engineering.
Subjects/Keywords: Image restoration; Deep learning
APA (6th Edition):
Chi, Z. (2018). IMAGE RESTORATIONS USING DEEP LEARNING TECHNIQUES. (Masters Thesis). McMaster University. Retrieved from http://hdl.handle.net/11375/24290
Chicago Manual of Style (16th Edition):
Chi, Zhixiang. “IMAGE RESTORATIONS USING DEEP LEARNING TECHNIQUES.” 2018. Masters Thesis, McMaster University. Accessed February 27, 2021.
http://hdl.handle.net/11375/24290.
MLA Handbook (7th Edition):
Chi, Zhixiang. “IMAGE RESTORATIONS USING DEEP LEARNING TECHNIQUES.” 2018. Web. 27 Feb 2021.
Vancouver:
Chi Z. IMAGE RESTORATIONS USING DEEP LEARNING TECHNIQUES. [Internet] [Masters thesis]. McMaster University; 2018. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/11375/24290.
Council of Science Editors:
Chi Z. IMAGE RESTORATIONS USING DEEP LEARNING TECHNIQUES. [Masters Thesis]. McMaster University; 2018. Available from: http://hdl.handle.net/11375/24290

Penn State University
30.
Lageman, Nathaniel John.
BinDNN: Resilient Function Matching Using Deep Learning.
Degree: 2016, Penn State University
URL: https://submit-etda.libraries.psu.edu/catalog/12477njl5114
▼ Determining whether two functions taken from different compiled binaries originate from the same function in the source code has many applications in malware reverse engineering. Namely, this process allows an analyst to filter large swaths of code, removing functions that have been previously observed or that originate in shared or trusted libraries. However, this task is challenging due to the myriad factors that influence the translation between source code and assembly instructions: the instruction stream created by a compiler is heavily influenced by optimizations, target platforms, and runtime constraints. In this paper, we seek to advance methods for reliably testing the equivalence of functions found in different executables. By leveraging advances in deep learning and natural language processing, we design and evaluate a novel algorithm, BINDNN, that is resilient to variations in compiler, compiler optimization level, and architecture. We show that BINDNN is effective both in isolation and in conjunction with existing approaches. In the case of the latter, we boost performance by 109% when combining BINDNN with BinDiff to compare functions across architectures. This result, an improvement of 32% for BINDNN and 185% for BinDiff, demonstrates the utility of employing multiple orthogonal approaches to function matching.
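The function-matching task can be grounded with a deliberately naive baseline (not BinDNN itself): treat each function as a bag of instruction mnemonics and compare cosine similarity. Compiler options tend to change registers and offsets far more than the mix of mnemonics, so even this crude view carries some signal; BinDNN replaces it with learned sequence models. The mnemonic streams below are hypothetical.

```python
# Naive function-matching baseline: cosine similarity over bags of
# instruction mnemonics. Illustrative only; BinDNN uses deep sequence
# models, not bag-of-words counts.
from collections import Counter
from math import sqrt

def cosine(f1, f2):
    """Cosine similarity between two mnemonic streams."""
    a, b = Counter(f1), Counter(f2)
    dot = sum(a[m] * b[m] for m in a)
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical streams: the same source function compiled at -O0 and -O2,
# plus an unrelated function.
f_o0 = ["push", "mov", "mov", "add", "mov", "pop", "ret"]
f_o2 = ["mov", "add", "ret"]
unrelated = ["xor", "cmp", "jne", "call", "ret"]
print(cosine(f_o0, f_o2) > cosine(f_o0, unrelated))  # True
```

The two compilations of one function score higher than the unrelated pair, but such counts are fragile across architectures, which is the gap learned representations aim to close.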
Advisors/Committee Members: Patrick Drew Mcdaniel, Thesis Advisor/Co-Advisor.
Subjects/Keywords: reverse engineering; malware; deep learning
APA (6th Edition):
Lageman, N. J. (2016). BinDNN: Resilient Function Matching Using Deep Learning. (Thesis). Penn State University. Retrieved from https://submit-etda.libraries.psu.edu/catalog/12477njl5114
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Lageman, Nathaniel John. “BinDNN: Resilient Function Matching Using Deep Learning.” 2016. Thesis, Penn State University. Accessed February 27, 2021.
https://submit-etda.libraries.psu.edu/catalog/12477njl5114.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Lageman, Nathaniel John. “BinDNN: Resilient Function Matching Using Deep Learning.” 2016. Web. 27 Feb 2021.
Vancouver:
Lageman NJ. BinDNN: Resilient Function Matching Using Deep Learning. [Internet] [Thesis]. Penn State University; 2016. [cited 2021 Feb 27].
Available from: https://submit-etda.libraries.psu.edu/catalog/12477njl5114.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Lageman NJ. BinDNN: Resilient Function Matching Using Deep Learning. [Thesis]. Penn State University; 2016. Available from: https://submit-etda.libraries.psu.edu/catalog/12477njl5114
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation