You searched for subject: (Recurrent Neural Network)
Showing records 1 – 30 of 161 total matches (page 1 of 6).

Texas A&M University
1.
Fan, David Dawei.
Backpropagation for Continuous Theta Neuron Networks.
Degree: MS, Electrical Engineering, 2015, Texas A&M University
URL: http://hdl.handle.net/1969.1/186998
The Theta neuron model is a spiking neuron model which, unlike traditional Leaky-Integrate-and-Fire neurons, can model spike latencies, threshold adaptation, bistability of resting and tonic firing states, and more. Previous work on learning rules for networks of theta neurons includes the derivation of a spike-timing based backpropagation algorithm for multilayer feedforward networks. However, this learning rule is only applicable to a fixed number of spikes per neuron and is unable to take into account the effects of synaptic dynamics. In this thesis a novel backpropagation learning rule for theta neuron networks is derived which incorporates synaptic dynamics, is applicable to changing numbers of spikes per neuron, and does not explicitly depend on spike timing. The learning rule is successfully applied to XOR, cosine and sinc function mappings, and comparisons with other learning rules for spiking neural networks are made. The algorithm achieves 97.8 percent training performance and 96.7 percent test performance on the Fisher Iris dataset, which is comparable to other spiking neural network learning rules. It also achieves 99.0 percent training performance and 99.14 percent test performance on the Wisconsin Breast Cancer dataset, which is better than the compared spiking neural network learning rules.
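For readers unfamiliar with the model, the theta neuron's phase dynamics can be sketched in a few lines. This is a generic illustration of the standard theta model (dθ/dt = (1 − cos θ) + (1 + cos θ)·I), not code from the thesis; the input currents, duration, and step size are arbitrary choices.

```python
import math

def simulate_theta_neuron(current, t_max=100.0, dt=0.01):
    """Euler-integrate the theta neuron model:
        dtheta/dt = (1 - cos(theta)) + (1 + cos(theta)) * I
    A spike is recorded each time the phase theta crosses pi."""
    theta = 0.0
    spike_times = []
    for step in range(int(t_max / dt)):
        dtheta = (1.0 - math.cos(theta)) + (1.0 + math.cos(theta)) * current
        theta += dtheta * dt
        if theta >= math.pi:          # phase crossing: emit a spike, wrap the phase
            spike_times.append(step * dt)
            theta -= 2.0 * math.pi
    return spike_times

# Positive input drives tonic firing; negative input leaves the neuron at rest,
# which is the bistability-of-regimes behavior the abstract refers to.
tonic = simulate_theta_neuron(0.1)
quiet = simulate_theta_neuron(-0.1)
```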
Advisors/Committee Members: Li, Peng (advisor), Choe, Yoonsuck (committee member), Han, Arum (committee member), Qian, Xiaoning (committee member).
Subjects/Keywords: Spiking Neural Network; Backpropagation; Recurrent Neural Network; Neural Network; Theta Neuron

Penn State University
2.
Lin, Tao.
A DATA TRIAGE RETRIEVAL SYSTEM FOR CYBER SECURITY OPERATIONS CENTER.
Degree: 2018, Penn State University
URL: https://submit-etda.libraries.psu.edu/catalog/14787txl78
Triage analysis is a fundamental stage in cyber operations in Security Operations Centers (SOCs). The massive data sources place great demands on cyber security analysts' capacity for information processing and analytical reasoning. Furthermore, most junior security analysts perform much less efficiently than senior analysts in deciding which data triage operations to perform. To help analysts perform better, retrieval methods are needed that facilitate data triage by retrieving the relevant historical data triage operations of senior security analysts. This thesis investigates retrieval methods based on recurrent neural networks, including rule-based retrieval and context-based retrieval of data triage operations, and discusses new directions for solving the data triage operation retrieval problem.
At present, most novice analysts responsible for data triage tasks suffer a great deal from the complexity and intensity of their tasks. To fill the gap, we propose to provide novice analysts with on-the-job suggestions by presenting the relevant data triage operations conducted by senior analysts in a previous task. A tracing method has been developed to track an analyst's data triage operations. This thesis mainly presents a data triage operation retrieval system that (1) models the context of a data triage analytic process, (2) uses a recurrent neural network to compare matching contexts, and (3) presents the matched traces to the novice analysts as suggestions. We have implemented and evaluated the performance of the system through both automated testing and human evaluation. The results show that the proposed retrieval system can effectively identify the relevant traces based on an analyst's current analytic process.
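The matching step described above — encoding an analyst's operation sequence and ranking stored senior-analyst traces by similarity — can be illustrated with a toy sketch. The operation vocabulary, the random untrained weights, and the use of cosine similarity here are illustrative assumptions, not details from the thesis.

```python
import math, random

random.seed(0)
VOCAB = ["filter", "search", "pivot", "tag", "export"]   # hypothetical triage operations
DIM = 8

# Tiny fixed random embedding and recurrent weights (untrained; illustration only).
EMB = {op: [random.uniform(-1, 1) for _ in range(DIM)] for op in VOCAB}
W = [[random.uniform(-0.5, 0.5) for _ in range(DIM)] for _ in range(DIM)]

def encode(ops):
    """Run a minimal tanh RNN over an operation sequence; return the final hidden state."""
    h = [0.0] * DIM
    for op in ops:
        x = EMB[op]
        h = [math.tanh(x[i] + sum(W[i][j] * h[j] for j in range(DIM))) for i in range(DIM)]
    return h

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(current_ops, traces):
    """Rank stored senior-analyst traces by similarity to the current context."""
    h = encode(current_ops)
    return max(traces, key=lambda t: cosine(h, encode(t)))

traces = [["filter", "search", "pivot"], ["tag", "export"]]
best = retrieve(["filter", "search"], traces)
```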
Advisors/Committee Members: Peng Liu, Thesis Advisor/Co-Advisor.
Subjects/Keywords: Recurrent Neural Network; Machine Learning; Retrieval; Security

Mid Sweden University
3.
Wang, Xutao.
Chinese Text Classification Based On Deep Learning.
Degree: Information Systems and Technology, 2018, Mid Sweden University
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-35322
Text classification has long been a concern in the area of natural language processing, especially now that data are becoming massive due to the development of the internet. The recurrent neural network (RNN) is one of the most popular methods for natural language processing because its recurrent architecture gives it the ability to process serialized information. Meanwhile, the convolutional neural network (CNN) has shown its ability to extract features from visual imagery. This paper combines the advantages of RNNs and CNNs and proposes a model called BLSTM-C for Chinese text classification. BLSTM-C begins with a bidirectional long short-term memory (BLSTM) layer, a special kind of RNN, to produce a sequence output based on both past and future context. It then feeds this sequence to a CNN layer, which is used to extract features from the sequence. We evaluate the BLSTM-C model on several tasks, such as sentiment classification and category classification, and the results show our model's remarkable performance on these text tasks.
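The BLSTM-C pipeline — a bidirectional recurrent layer whose sequence output feeds a convolutional layer with max-over-time pooling — can be sketched with toy scalar embeddings. This is a simplified illustration (a plain tanh recurrence standing in for LSTM cells, random untrained weights, a single filter), not the author's model; all dimensions are arbitrary.

```python
import math, random

random.seed(1)
DIM = 4       # hidden size per direction (toy value)
K = 3         # convolution window width (time steps)

def rnn_pass(xs, w):
    """Simple tanh recurrence over scalar-embedded tokens; returns all hidden states."""
    h = [0.0] * DIM
    out = []
    for x in xs:
        h = [math.tanh(w[i] * x + 0.5 * h[i]) for i in range(DIM)]
        out.append(h)
    return out

def blstm_c(xs, w_fwd, w_bwd, filt):
    """BLSTM-C-style pipeline (toy): bidirectional recurrent layer -> 1-D conv -> max pool."""
    fwd = rnn_pass(xs, w_fwd)
    bwd = list(reversed(rnn_pass(list(reversed(xs)), w_bwd)))
    seq = [f + b for f, b in zip(fwd, bwd)]          # concat directions: 2*DIM features/step
    conv = []
    for t in range(len(seq) - K + 1):                # slide a window of K time steps
        window = [v for step in seq[t:t + K] for v in step]
        conv.append(sum(fi * vi for fi, vi in zip(filt, window)))
    return max(conv)                                  # max-over-time pooling -> one feature

w_fwd = [random.uniform(-1, 1) for _ in range(DIM)]
w_bwd = [random.uniform(-1, 1) for _ in range(DIM)]
filt = [random.uniform(-1, 1) for _ in range(2 * DIM * K)]
feature = blstm_c([0.2, -0.5, 0.9, 0.1, -0.3], w_fwd, w_bwd, filt)
```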
Subjects/Keywords: Text classification; Recurrent neural network; Convolutional neural network; Computer Systems; Datorsystem

York University
4.
Bidgoli, Rohollah Soltani.
Higher Order Recurrent Neural Network for Language Modeling.
Degree: MSc -MS, Computer Science, 2016, York University
URL: http://hdl.handle.net/10315/32337
In this thesis, we study novel neural network structures to better model long-term dependency in sequential data. We propose to use more memory units to keep track of more preceding states in recurrent neural networks (RNNs), which are all recurrently fed back to the hidden layers through different weighted paths. By extending the popular recurrent structure in RNNs, we provide the models with a better short-term memory mechanism for learning long-term dependency in sequences. By analogy with digital filters in signal processing, we call these structures higher order RNNs (HORNNs). Like regular RNNs, HORNNs can be learned using the back-propagation through time method, and they are generally applicable to a variety of sequence modelling tasks. In this work, we have examined HORNNs for the language modeling task using two popular data sets, namely the Penn Treebank (PTB) and English text8 data sets. Experimental results show that the proposed HORNNs yield state-of-the-art performance on both data sets, significantly outperforming regular RNNs as well as the popular LSTMs.
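The core idea — feeding several preceding hidden states back through separate weighted paths — can be sketched in scalar form. This is an illustrative toy, not the thesis's implementation; weights and inputs are arbitrary. On an impulse input, the higher-order variant retains the signal longer than a plain RNN.

```python
import math

def hornn(xs, w_in, u, order=3):
    """Higher-order RNN (scalar toy version): the new hidden state depends on the
    previous `order` hidden states, each through its own weight:
        h_t = tanh(w_in * x_t + sum_k u[k] * h_{t-1-k})
    A plain RNN is the special case order=1."""
    history = [0.0] * order          # [h_{t-1}, h_{t-2}, ...]
    outputs = []
    for x in xs:
        h = math.tanh(w_in * x + sum(u[k] * history[k] for k in range(order)))
        history = [h] + history[:-1]
        outputs.append(h)
    return outputs

# Compare a plain RNN (order=1) with a 3rd-order HORNN on the same impulse input:
# the extra feedback paths let the impulse persist in the hidden state longer.
xs = [1.0, 0.0, 0.0, 0.0, 0.0]
plain = hornn(xs, 0.8, [0.5], order=1)
higher = hornn(xs, 0.8, [0.5, 0.3, 0.2], order=3)
```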
Advisors/Committee Members: Jiang, Hui (advisor).
Subjects/Keywords: Computer science; Machine Learning; Deep Learning; Neural Network; Recurrent Neural Network; Language Modeling; Higher Order Recurrent Neural Network

Tampere University
5.
Zhou, Yi.
Sentiment classification with deep neural networks.
Degree: 2019, Tampere University
URL: https://trepo.tuni.fi//handle/10024/116148
Sentiment classification is an important task in the Natural Language Processing (NLP) area, and deep neural networks have become the mainstream method for text sentiment classification. In this thesis two datasets are used. The first is a hotel review dataset (the TripAdvisor dataset), which collects hotel reviews from the TripAdvisor website using the Python Scrapy framework; preprocessing steps are then applied to clean it. A record in the TripAdvisor dataset consists of a text review and a corresponding sentiment score, with five sentiment labels: very negative, negative, neutral, positive, and very positive. The second is the Stanford Sentiment Treebank (SST) dataset, a public and commonly used dataset for sentiment classification.
The Text Convolutional Neural Network (Text-CNN), the Very Deep Convolutional Neural Network (VD-CNN), and the Bidirectional Long Short-Term Memory neural network (BiLSTM) were chosen as the methods for evaluation in the experiments. Text-CNN was the first work to apply a convolutional neural network architecture to text classification. VD-CNN applies deep convolutional layers, with up to 29 layers, to perform text classification. BiLSTM exploits a bidirectional recurrent neural network with the long short-term memory cell mechanism. Word embedding techniques are also an important factor in sentiment classification, so the GloVe and FastText techniques were used to investigate the effect of word embedding initialization. GloVe is an unsupervised word embedding learning algorithm; FastText uses a shallow neural network to generate word vectors and offers fast training convergence and high inference speed.
The experiments were implemented using the PyTorch framework. On the TripAdvisor dataset, BiLSTM with GloVe as the word vector initialization achieved the highest accuracy, 73.73%, while VD-CNN with FastText had the lowest accuracy, 71.95%; the BiLSTM model achieved a 0.68 F1-score and the VD-CNN model a 0.67 F1-score. On the SST dataset, BiLSTM with GloVe again achieved the highest accuracy, 36.35%, and a 0.35 F1-score, while the VD-CNN model with GloVe had the worst results in terms of both accuracy and F1-score. The Text-CNN model performed better than the VD-CNN model in most cases, even though the VD-CNN model has more layers.
By analyzing the misclassified reviews in the TripAdvisor dataset from the three deep neural networks, it is shown that hotel reviews with more contradictory sentiment words were more prone to misclassification than other hotel reviews.
Subjects/Keywords: deep neural networks; convolutional neural network; recurrent neural network; sentiment classification; hotel reviews; TripAdvisor

University of Illinois – Urbana-Champaign
6.
Yan, Zhicheng.
Image recognition, semantic segmentation and photo adjustment using deep neural networks.
Degree: PhD, Computer Science, 2016, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/90724
Deep Neural Networks (DNNs) have proven to be effective models for solving various problems in computer vision. Multi-Layer Perceptron Networks, Convolutional Neural Networks and Recurrent Neural Networks are representative examples of DNNs in the setting of supervised learning. The key ingredients in the successful development of DNN-based models include, but are not limited to, task-specific designs of network architecture, discriminative feature representation learning and scalable training algorithms.
In this thesis, we describe a collection of DNN-based models to address three challenging computer vision tasks, namely large-scale visual recognition, image semantic segmentation and automatic photo adjustment. For each task, the network architecture is carefully designed on the basis of the nature of the task. For large-scale visual recognition, we design a hierarchical Convolutional Neural Network to fully exploit a semantic hierarchy among visual categories. The resulting model can be deemed an ensemble of specialized classifiers, and it improves state-of-the-art results at an affordable increase in computational cost. For image semantic segmentation, we integrate convolutional layers with novel spatially recurrent layers to incorporate global context into the prediction process. The resulting hybrid network is capable of learning improved feature representations, which lead to more accurate region recognition and boundary localization. Combined with a post-processing step involving a fully-connected conditional random field, our hybrid network achieves new state-of-the-art results on a large benchmark dataset. For automatic photo adjustment, we take a data-driven approach to learn the underlying color transforms from manually enhanced examples. We formulate the learning problem as a regression task, which can be approached with a Multi-Layer Perceptron network: we concatenate global contextual features, local contextual features and pixel-wise features and feed them into the deep network. State-of-the-art results are achieved on datasets with both global and local stylized adjustments.
Advisors/Committee Members: Yu, Yizhou (advisor), Lazebnik, Svetlana (Committee Chair), Forsyth, David (committee member), Cao, Liangliang (committee member).
Subjects/Keywords: Deep Neural Network; Image Recognition; Semantic Segmentation; Photo Adjustment; Convolutional Neural Network; Recurrent Neural Network; Multi-Layer Perceptron Network

Carnegie Mellon University
7.
Le, Ngan Thi Hoang.
Contextual Recurrent Level Set Networks and Recurrent Residual Networks for Semantic Labeling.
Degree: 2018, Carnegie Mellon University
URL: http://repository.cmu.edu/dissertations/1166
Semantic labeling is becoming more and more popular among researchers in computer vision and machine learning. Many applications, such as autonomous driving, tracking, indoor navigation, augmented reality systems, semantic searching and medical imaging, are on the rise, requiring more accurate and efficient segmentation mechanisms. In recent years, deep learning approaches based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have emerged as the dominant paradigm for solving many problems in computer vision and machine learning. The main focus of this thesis is to investigate robust approaches that can tackle challenging semantic labeling tasks, including semantic instance segmentation and scene understanding. In the first approach, we convert the classic variational Level Set method into a learnable deep framework by proposing a novel definition of contour evolution named Recurrent Level Set (RLS). The proposed RLS employs Gated Recurrent Units to solve the energy minimization of a variational Level Set functional. The curve deformation process in RLS is formulated as a hidden-state evolution procedure and is updated by minimizing an energy functional composed of fitting forces and contour length. We show that by sharing the convolutional features in a fully end-to-end trainable framework, RLS can be extended to Contextual Recurrent Level Set (CRLS) Networks to address semantic segmentation in the wild. Experimental results show that our proposed RLS improves both computational time and segmentation accuracy over classic variational Level Set-based methods, while the fully end-to-end system CRLS achieves competitive performance compared to state-of-the-art semantic segmentation approaches on the PASCAL VOC 2012 and MS COCO 2014 databases.
The second proposed approach, Contextual Recurrent Residual Networks (CRRN), inherits all the merits of sequence learning and residual learning in order to simultaneously model long-range contextual information and learn powerful visual representations within a single deep network. Our proposed CRRN deep network consists of three parts corresponding to sequential input data, sequential output data and hidden state, as in a recurrent network. Each unit in the hidden state is designed as a combination of two components: a context-based component via sequence learning and a visual-based component via residual learning. That is, each hidden unit in our proposed CRRN simultaneously (1) learns long-range contextual dependencies via the context-based component, where the relationship between the current unit and the previous units is modeled as sequential information under an undirected cyclic graph (UCG), and (2) provides a powerful encoded visual representation via the residual component, which contains blocks of convolution and/or batch normalization layers equipped with an identity skip connection. Furthermore, unlike previous scene labeling approaches [1, 2, 3], our method is not only able to exploit the…
Subjects/Keywords: Gated Recurrent Unit; Level Set; Recurrent Neural Networks; Residual Network; Scene Labeling; Semantic Instance Segmentation
8.
Sarika, Pawan Kumar.
Comparing LSTM and GRU for Multiclass Sentiment Analysis of Movie Reviews.
Degree: 2020, Faculty of Computing
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20213
Today, we are living in a data-driven world. Due to the surge in data generation, there is a need for efficient and accurate techniques to analyze data. One such kind of data that needs to be analyzed is text reviews of movies. Rather than classifying the reviews as positive or negative, we classify the sentiment of the reviews on a scale of one to ten. In doing so, we compare two recurrent neural network algorithms, Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). The main objective of this study is to compare the accuracies of LSTM and GRU models. For training the models, we collected data from two different sources; for filtering the data, we used Porter stemming and stop-word removal. We coupled LSTM and GRU with convolutional neural networks to increase performance. After conducting the experiments, we observed that LSTM performed better at predicting border values, whereas GRU predicted every class equally. Overall, GRU was able to predict the multiclass text data of movie reviews slightly better than LSTM, though GRU was computationally expensive compared to LSTM.
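One structural difference behind LSTM-versus-GRU comparisons like this one can be made concrete by counting parameters: an LSTM cell has four gate-like units (input, forget, output, candidate) while a GRU has three (update, reset, candidate), so for the same dimensions a GRU cell has 25% fewer trainable parameters. Measured training cost also depends on the implementation, so this does not contradict the study's runtime observation. The dimensions below are arbitrary illustration values, not those used in the thesis.

```python
def rnn_cell_params(input_dim, hidden_dim, gates):
    """Trainable parameters of a gated recurrent cell: each gate-like unit has an
    input weight matrix, a recurrent weight matrix, and a bias vector."""
    per_gate = hidden_dim * input_dim + hidden_dim * hidden_dim + hidden_dim
    return gates * per_gate

# Hypothetical embedding and hidden sizes, purely for illustration.
emb, hidden = 100, 128
lstm = rnn_cell_params(emb, hidden, 4)   # 4 gate-like units
gru = rnn_cell_params(emb, hidden, 3)    # 3 gate-like units
```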
Subjects/Keywords: Gated recurrent unit; Multiclass classification; Movie reviews; Sentiment Analysis; Recurrent neural network; Computer Systems; Datorsystem

Addis Ababa University
9.
Tewodros, Kibatu.
Recurrent Neural Network-based Base Transceiver Station Power System Failure Prediction.
Degree: 2019, Addis Ababa University
URL: http://etd.aau.edu.et/handle/123456789/21111
▼ Global network infrastructures are expanding with the development of new technologies and the growth of Internet traffic. As network infrastructures grow, maintaining and monitoring them becomes very challenging, since thousands of alarms are generated every day. Clearing those alarms through corrective maintenance requires considerable effort and resources (vehicles, labor, and budget).
In mobile networks, a Base Transceiver Station (BTS) is a key infrastructure element that connects customer equipment to the cellular network. BTS services may be interrupted by transmission faults, optical fiber cuts, power system failures, natural disasters, and more. In the case of Ethio Telecom (ET), the sole telecom service provider in Ethiopia, power system failure accounts for the largest share of BTS service interruptions. Minimizing power system failures reduces BTS downtime, thereby improving customer satisfaction and maximizing revenue. Recently, machine learning algorithms have been used to predict failures in areas such as power distribution, hydropower plants, solar power plants, and high-voltage transmission grids.
This thesis investigates predicting BTS power system failure using recurrent neural network (RNN) variants, namely long short-term memory (LSTM) and gated recurrent unit (GRU), with linear and sigmoid activation functions applied at the output. In parallel, the prediction performance of LSTM and GRU is compared. Data collected from five BTS sites over twenty weeks of observation are used to train and test the models. The data are prepared in two arrangements, single-site and multi-site, to check the impact of increasing data size under different arrangements on the prediction results. Mean squared error (MSE) and the number of epochs are used to evaluate the models under different configurations. Based on the results, GRU with a sigmoid activation function and feature reduction achieves better performance than LSTM. In addition, both LSTM and GRU can be used for predicting BTS power system failure.
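As a rough illustration of the GRU variant the thesis compares against LSTM, here is a minimal NumPy sketch of a single GRU step; the dimensions and random weights are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: two gates plus a candidate state."""
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1 - z) * h + z * h_tilde           # blend old and new state

rng = np.random.default_rng(0)
n_in, n_h = 4, 8                               # illustrative sizes
params = [0.1 * rng.standard_normal(s) for s in [(n_in, n_h), (n_h, n_h)] * 3]

h = np.zeros(n_h)
for t in range(20):                            # run over a 20-step sequence
    x = rng.standard_normal(n_in)
    h = gru_step(x, h, *params)
print(h.shape)                                 # (8,)
```

A GRU has three weight pairs where an LSTM has four, which is one reason it is often the cheaper of the two to train.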
Advisors/Committee Members: Dereje, Hailemariam (PhD) (advisor).
Subjects/Keywords: Base Transceiver Station; Gated Recurrent Unit; Long Short Term Memory; Recurrent Neural Network
APA (6th Edition):
Tewodros, K. (2019). Recurrent Neural Network-based Base Transceiver Station Power System Failure Prediction. (Thesis). Addis Ababa University. Retrieved from http://etd.aau.edu.et/handle/123456789/21111
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Tewodros, Kibatu. “Recurrent Neural Network-based Base Transceiver Station Power System Failure Prediction.” 2019. Thesis, Addis Ababa University. Accessed March 07, 2021.
http://etd.aau.edu.et/handle/123456789/21111.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Tewodros, Kibatu. “Recurrent Neural Network-based Base Transceiver Station Power System Failure Prediction.” 2019. Web. 07 Mar 2021.
Vancouver:
Tewodros K. Recurrent Neural Network-based Base Transceiver Station Power System Failure Prediction. [Internet] [Thesis]. Addis Ababa University; 2019. [cited 2021 Mar 07].
Available from: http://etd.aau.edu.et/handle/123456789/21111.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Tewodros K. Recurrent Neural Network-based Base Transceiver Station Power System Failure Prediction. [Thesis]. Addis Ababa University; 2019. Available from: http://etd.aau.edu.et/handle/123456789/21111
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of California – Berkeley
10.
Thanapirom, Chayut.
Neural Representation Learning with Denoising Autoencoder Framework.
Degree: Physics, 2016, University of California – Berkeley
URL: http://www.escholarship.org/uc/item/0hm6p6s5
► Understanding of how the brain works and how it can solve difficult problems like image recognition is very important, especially for the progress in developing…
(more)
▼ Understanding how the brain works and how it can solve difficult problems like image recognition is very important, especially for progress in developing autonomous intelligent systems. Even though we have a lot of experimental data in neuroscience, we lack theories that can tie all the observations together. One approach to understanding the brain is to investigate the representation of sensory information in the brain at each stage, and to try to explain it with some computational-level principle, for example an efficient coding principle. This thesis follows this approach. I use the denoising autoencoder framework to attack two unsolved problems in Computational Neuroscience. The first problem is learning the group structure in the group sparse coding model. I propose that it is possible to learn the group structure using gradient descent with a data denoising objective function. To verify the method, I train a model on van Hateren's natural image dataset. The model with the learned group structure shows a 15% (SNR) improvement in denoising performance over the regular sparse coding model. Moreover, the learned group structure groups together sparse coding basis functions with similar location, orientation and scale. The second problem is to understand why we have grid cells. I propose that a population of place cells and grid cells should be modeled as an attractor network with noisy neurons. Furthermore, this attractor network can be trained with the denoising autoencoder framework to memorize a location in 1D space (a simplification of the actual problem, where the location is in 2D space). I show that the retrieved location accuracy of the network with both place cells and grid cells is higher than that of the network with place cells alone. The performance difference arises because the activity of the grid cells acts as an efficient error-correcting code.
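The core idea of learning structure by gradient descent on a data denoising objective can be sketched with a toy linear denoising autoencoder with tied weights; the sizes, noise level, and the linear (no-grouping) simplification below are my assumptions, not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 16))      # toy "clean" data
W = 0.1 * rng.standard_normal((16, 8))  # tied encoder/decoder weights

def recon_mse(W):
    """Reconstruction error of clean data through the bottleneck."""
    return np.mean(((X @ W) @ W.T - X) ** 2)

before = recon_mse(W)
lr = 0.02
for _ in range(500):
    noise = 0.3 * rng.standard_normal(X.shape)
    Xn = X + noise                      # corrupt the input
    E = (Xn @ W) @ W.T - X              # reconstruct, compare to *clean* data
    grad = (Xn.T @ E @ W + E.T @ Xn @ W) / len(X)
    W -= lr * grad                      # descend the denoising objective
after = recon_mse(W)
print(before > after)                   # True: denoising training helps
```

The key detail is that the loss compares the reconstruction of corrupted input against the clean data, which is what makes the objective a denoising one rather than plain autoencoding.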
Subjects/Keywords: Biophysics; Neurosciences; Attractor Network; Denoising Autoencoder; Grid Cells; Neural Representation; Recurrent Neural Network; Sparse Coding
APA (6th Edition):
Thanapirom, C. (2016). Neural Representation Learning with Denoising Autoencoder Framework. (Thesis). University of California – Berkeley. Retrieved from http://www.escholarship.org/uc/item/0hm6p6s5
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Thanapirom, Chayut. “Neural Representation Learning with Denoising Autoencoder Framework.” 2016. Thesis, University of California – Berkeley. Accessed March 07, 2021.
http://www.escholarship.org/uc/item/0hm6p6s5.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Thanapirom, Chayut. “Neural Representation Learning with Denoising Autoencoder Framework.” 2016. Web. 07 Mar 2021.
Vancouver:
Thanapirom C. Neural Representation Learning with Denoising Autoencoder Framework. [Internet] [Thesis]. University of California – Berkeley; 2016. [cited 2021 Mar 07].
Available from: http://www.escholarship.org/uc/item/0hm6p6s5.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Thanapirom C. Neural Representation Learning with Denoising Autoencoder Framework. [Thesis]. University of California – Berkeley; 2016. Available from: http://www.escholarship.org/uc/item/0hm6p6s5
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Bridgeport
11.
Hassan, Abdalraouf.
Deep Neural Language Model for Text Classification Based on Convolutional and Recurrent Neural Networks
.
Degree: 2018, University of Bridgeport
URL: https://scholarworks.bridgeport.edu/xmlui/handle/123456789/2274
► The evolution of the social media and the e-commerce sites produces a massive amount of unstructured text data on the internet. Thus, there is a…
(more)
▼ The evolution of social media and e-commerce sites produces a massive amount of unstructured text data on the internet. Thus, there is a high demand for intelligent models that can process it and extract useful information from it. Text classification is an important task for many Natural Language Processing (NLP) applications such as sentiment analysis, web search, spam filtering, and information retrieval, in which we need to assign single or multiple predefined categories to a sequence of text. In Neural Network Language Models, learning long-term dependencies with gradient descent is difficult due to the vanishing gradient problem. Recently, researchers have started to increase the depth of the network in order to overcome the limitations of existing techniques. However, increasing the depth of the network means increasing the number of parameters, which makes the network computationally expensive and more prone to overfitting. Furthermore, NLP systems traditionally treat words as discrete atomic symbols, so the model can leverage only small amounts of information regarding the relationships between the individual symbols. In recent years, deep learning models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have been applied to language modeling with remarkable results. CNNs are an effective approach for extracting higher-level features invariant to local translation. However, this method requires stacking multiple convolutional layers in order to capture long-term dependencies, because of the locality of the convolutional and pooling layers. In this dissertation, we introduce a joint CNN-RNN framework to overcome these problems. Briefly, we apply an unsupervised neural language model to train initial word embeddings that are further tuned by our deep learning network, and the pre-trained parameters of the network are used to initialize the model.
At the final stage, the proposed framework combines former information with a set of feature maps learned by a convolutional layer and long-term dependencies learned via Long Short-Term Memory (LSTM). Empirically, we show that our approach, with slight hyperparameter tuning and static vectors, achieves outstanding results on multiple sentiment analysis benchmarks. Our approach outperforms several existing approaches in terms of accuracy; our results are also competitive with the state-of-the-art on the Stanford Large Movie Review (IMDB) dataset and the Stanford Sentiment Treebank (SSTb) dataset. A significant feature of our approach is reducing the number of parameters by constructing the convolutional layer followed by the recurrent layer with no pooling layers. Our results show that we were able to reduce the loss of detailed local information and capture long-term dependencies with an efficient framework that has fewer parameters and a high level of performance.
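The conv-then-recurrent pipeline without pooling can be sketched at the shape level; the sizes are made up and a plain tanh RNN stands in for the LSTM, so this is an illustration of the data flow rather than the dissertation's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
T, d, k, f = 12, 8, 3, 5                # seq length, embed dim, kernel width, filters
E = rng.standard_normal((T, d))         # word embeddings for one sentence
K = 0.1 * rng.standard_normal((k, d, f))

# 1D convolution: one feature vector per valid k-gram window, no pooling.
feats = np.stack([np.einsum('kd,kdf->f', E[t:t + k], K)
                  for t in range(T - k + 1)])

# Recurrent layer aggregates the feature maps into a sentence vector.
Wx = 0.1 * rng.standard_normal((f, f))
Wh = 0.1 * rng.standard_normal((f, f))
h = np.zeros(f)
for x in feats:
    h = np.tanh(x @ Wx + h @ Wh)
print(feats.shape, h.shape)             # (10, 5) (5,)
```

Because no pooling layer discards positions, every local feature vector reaches the recurrent layer, which is the property the dissertation credits for preserving detailed local information.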
Subjects/Keywords: Convolutional neural network; Deep learning; Machine learning; Natural language processing; Recurrent neural network; Sentiment analysis
APA (6th Edition):
Hassan, A. (2018). Deep Neural Language Model for Text Classification Based on Convolutional and Recurrent Neural Networks
. (Thesis). University of Bridgeport. Retrieved from https://scholarworks.bridgeport.edu/xmlui/handle/123456789/2274
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Hassan, Abdalraouf. “Deep Neural Language Model for Text Classification Based on Convolutional and Recurrent Neural Networks
.” 2018. Thesis, University of Bridgeport. Accessed March 07, 2021.
https://scholarworks.bridgeport.edu/xmlui/handle/123456789/2274.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Hassan, Abdalraouf. “Deep Neural Language Model for Text Classification Based on Convolutional and Recurrent Neural Networks
.” 2018. Web. 07 Mar 2021.
Vancouver:
Hassan A. Deep Neural Language Model for Text Classification Based on Convolutional and Recurrent Neural Networks
. [Internet] [Thesis]. University of Bridgeport; 2018. [cited 2021 Mar 07].
Available from: https://scholarworks.bridgeport.edu/xmlui/handle/123456789/2274.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Hassan A. Deep Neural Language Model for Text Classification Based on Convolutional and Recurrent Neural Networks
. [Thesis]. University of Bridgeport; 2018. Available from: https://scholarworks.bridgeport.edu/xmlui/handle/123456789/2274
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Victoria University of Wellington
12.
Chandra, Rohitash.
Problem Decomposition and Adaptation in Cooperative Neuro-Evolution.
Degree: 2012, Victoria University of Wellington
URL: http://hdl.handle.net/10063/2110
► One way to train neural networks is to use evolutionary algorithms such as cooperative coevolution - a method that decomposes the network's learnable parameters into…
(more)
▼ One way to train neural networks is to use evolutionary algorithms such as cooperative coevolution - a method that decomposes the network's learnable parameters into subsets, called subcomponents. Cooperative coevolution gains an advantage over other methods by evolving particular subcomponents independently from the rest of the network. Its success depends strongly on how the problem decomposition is carried out. This thesis suggests new forms of problem decomposition, based on a novel and intuitive choice of modularity, and examines in detail at what stage and to what extent the different decomposition methods should be used. The new methods are evaluated by training feedforward networks to solve pattern classification tasks, and by training recurrent networks to solve grammatical inference problems.
Efficient problem decomposition methods group interacting variables into the same subcomponents. We examine the methods from the literature and provide an analysis of the nature of the neural network optimization problem in terms of interacting variables. We then present a novel problem decomposition method that groups interacting variables and that can be generalized to neural networks with more than a single hidden layer.
We then incorporate local search into cooperative neuro-evolution. We present a memetic cooperative coevolution method that takes into account the cost of employing local search across several sub-populations. The optimisation process changes during evolution in terms of diversity and interacting variables. To address this, we examine the adaptation of the problem decomposition method during the evolutionary process. The results in this thesis show that the proposed methods improve performance in terms of optimization time, scalability and robustness.
As a further test, we apply the problem decomposition and adaptive cooperative coevolution methods to training recurrent neural networks on chaotic time series problems. The proposed methods show better performance in terms of accuracy and robustness.
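One common decomposition in this literature groups the incoming weights of each neuron into its own subcomponent; the sketch below shows that grouping on a tiny feedforward net. The sizes are illustrative, and neuron-level grouping is one standard choice of modularity rather than necessarily the thesis's exact method.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hid, n_out = 4, 3, 2
W1 = rng.standard_normal((n_in, n_hid))   # input -> hidden weights
W2 = rng.standard_normal((n_hid, n_out))  # hidden -> output weights

def decompose(W1, W2):
    """Neuron-level grouping: a neuron's incoming links interact
    strongly, so each set is evolved together as one subcomponent."""
    subs = [W1[:, j].copy() for j in range(W1.shape[1])]   # hidden neurons
    subs += [W2[:, o].copy() for o in range(W2.shape[1])]  # output neurons
    return subs

subcomponents = decompose(W1, W2)
print(len(subcomponents))   # 5 sub-populations: 3 hidden + 2 output neurons
```

Each subcomponent would then get its own sub-population in the cooperative coevolution loop, evaluated by inserting its candidate weights into the otherwise-fixed network.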
Advisors/Committee Members: Frean, Marcus, Zhang, Mengjie.
Subjects/Keywords: Neural networks; Cooperative coevolution; Recurrent network; Co-operative co-evolution
APA (6th Edition):
Chandra, R. (2012). Problem Decomposition and Adaptation in Cooperative Neuro-Evolution. (Doctoral Dissertation). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/2110
Chicago Manual of Style (16th Edition):
Chandra, Rohitash. “Problem Decomposition and Adaptation in Cooperative Neuro-Evolution.” 2012. Doctoral Dissertation, Victoria University of Wellington. Accessed March 07, 2021.
http://hdl.handle.net/10063/2110.
MLA Handbook (7th Edition):
Chandra, Rohitash. “Problem Decomposition and Adaptation in Cooperative Neuro-Evolution.” 2012. Web. 07 Mar 2021.
Vancouver:
Chandra R. Problem Decomposition and Adaptation in Cooperative Neuro-Evolution. [Internet] [Doctoral dissertation]. Victoria University of Wellington; 2012. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/10063/2110.
Council of Science Editors:
Chandra R. Problem Decomposition and Adaptation in Cooperative Neuro-Evolution. [Doctoral Dissertation]. Victoria University of Wellington; 2012. Available from: http://hdl.handle.net/10063/2110

Georgia Tech
13.
Chen, Hua.
Single channel speech enhancement with residual learning and recurrent network.
Degree: MS, Electrical and Computer Engineering, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/62839
► For speech enhancement tasks, non-stationary noise such as babble noise is much harder to suppress than stationary noise. In low SNR environment, it is even…
(more)
▼ For speech enhancement tasks, non-stationary noise such as babble noise is much harder to suppress than stationary noise. In a low-SNR environment, it is even more challenging to remove noise without creating significant artifacts and distortion. Moreover, many state-of-the-art deep learning algorithms adopt a multiple-time-frames-to-one-time-frame regression model. In our work, we propose a speech denoising neural network that maps multiple time frames to multiple time frames, aiming to greatly reduce the computational burden for real-world applications while maintaining decent speech quality. We propose two neural networks, namely ResSE and ResCRN. ResSE takes the form of a ResNet architecture and is inspired by DuCNN, an image enhancement network. With its rich and deep structure and the help of residual connections, ResSE is very efficient at extracting spatial features and is able to outperform traditional log-MMSE algorithms. ResCRN, with the addition of LSTM layers, is capable of both spatial and temporal modeling. It utilizes both local and global contextual structure information and improves speech quality even when faced with unseen speakers and unseen noises, showing that ResCRN generalizes well.
Advisors/Committee Members: Anderson, David V. (advisor), Davenport, Mark A. (committee member), Lee, Chin-hui (committee member), Truong, Kwan (committee member).
Subjects/Keywords: Speech enhancement; Machine learning; ResNet; Convolutional recurrent neural network
APA (6th Edition):
Chen, H. (2020). Single channel speech enhancement with residual learning and recurrent network. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62839
Chicago Manual of Style (16th Edition):
Chen, Hua. “Single channel speech enhancement with residual learning and recurrent network.” 2020. Masters Thesis, Georgia Tech. Accessed March 07, 2021.
http://hdl.handle.net/1853/62839.
MLA Handbook (7th Edition):
Chen, Hua. “Single channel speech enhancement with residual learning and recurrent network.” 2020. Web. 07 Mar 2021.
Vancouver:
Chen H. Single channel speech enhancement with residual learning and recurrent network. [Internet] [Masters thesis]. Georgia Tech; 2020. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/1853/62839.
Council of Science Editors:
Chen H. Single channel speech enhancement with residual learning and recurrent network. [Masters Thesis]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62839

UCLA
14.
Li, Siyuan.
Application of Recurrent Neural Networks In Toxic Comment Classification.
Degree: Statistics, 2018, UCLA
URL: http://www.escholarship.org/uc/item/5f87h061
► Moderators of online discussion forums often struggle with controlling extremist comments on their platforms. To help provide an efficient and accurate tool to detect online…
(more)
▼ Moderators of online discussion forums often struggle to control extremist comments on their platforms. To help provide an efficient and accurate tool for detecting online toxicity, we apply word2vec's Skip-Gram embedding vectors and Recurrent Neural Network models such as Bidirectional Long Short-Term Memory to a toxic comment classification problem with a labeled dataset from the Wikipedia Talk Page. We explore different pre-trained embedding vectors from larger corpora. We also address the class imbalance in the dataset by employing sampling techniques and a penalized loss. The models we applied yield high overall accuracy at relatively low cost.
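The loss-penalization remedy for class imbalance amounts to weighting each class inversely to its frequency; a sketch with hypothetical label counts (the 900/100 split is invented, not from the dataset):

```python
import numpy as np

# Hypothetical counts: toxic comments are the rare class.
labels = np.array([0] * 900 + [1] * 100)   # 0 = clean, 1 = toxic
counts = np.bincount(labels)

# "Balanced" weights: n_samples / (n_classes * n_samples_per_class).
weights = len(labels) / (len(counts) * counts)
print(weights)   # rare class weighted 9x more than the common one
```

These per-class weights are then multiplied into the loss term of each training example, so misclassifying a rare toxic comment costs the model correspondingly more.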
Subjects/Keywords: Statistics; classification; natural language processing; recurrent neural network; word2vec
APA (6th Edition):
Li, S. (2018). Application of Recurrent Neural Networks In Toxic Comment Classification. (Thesis). UCLA. Retrieved from http://www.escholarship.org/uc/item/5f87h061
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Li, Siyuan. “Application of Recurrent Neural Networks In Toxic Comment Classification.” 2018. Thesis, UCLA. Accessed March 07, 2021.
http://www.escholarship.org/uc/item/5f87h061.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Li, Siyuan. “Application of Recurrent Neural Networks In Toxic Comment Classification.” 2018. Web. 07 Mar 2021.
Vancouver:
Li S. Application of Recurrent Neural Networks In Toxic Comment Classification. [Internet] [Thesis]. UCLA; 2018. [cited 2021 Mar 07].
Available from: http://www.escholarship.org/uc/item/5f87h061.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Li S. Application of Recurrent Neural Networks In Toxic Comment Classification. [Thesis]. UCLA; 2018. Available from: http://www.escholarship.org/uc/item/5f87h061
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Delft University of Technology
15.
Mulder, Boris (author).
Latent Space Modelling of Unsteady Flow Subdomains: Thesis Report.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:fbf93d6e-211f-4b92-a057-956d694db315
► Very complex flows can be expensive to compute using current CFD techniques. In this thesis, models based on deep learning were used to replace certain…
(more)
▼ Very complex flows can be expensive to compute using current CFD techniques. In this thesis, models based on deep learning were used to replace well-understood regions of the flow domain with simplified models in order to increase efficiency. To keep the error produced by the deep learning model bounded, a traditional CFD model and a deep learning model were coupled using a boundary overlap area. In this overlap area, the flow computed by the traditional CFD model was used by the deep learning model as an input. It was demonstrated that, since the traditional CFD model continuously feeds reliable information into the deep learning domain, the error remains bounded. Furthermore, it was found that the accuracy of the deep learning models depends significantly on the random initial weights. Therefore, deep learning models trained differently must be compared carefully.
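A toy 1D analogue of the coupling scheme: a "trusted solver" half and a "surrogate" half of the domain are stepped separately, and the surrogate's input includes overlap cells computed by the trusted side each step. The diffusion stencil and domain sizes are invented for illustration.

```python
import numpy as np

n, overlap = 40, 4
u = np.sin(np.linspace(0.0, np.pi, n))    # initial field

def diffuse(v):
    """Explicit diffusion step; stand-in for any time stepper."""
    out = v.copy()
    out[1:-1] += 0.1 * (v[2:] - 2 * v[1:-1] + v[:-2])
    return out

for _ in range(50):
    left = diffuse(u[: n // 2 + overlap])  # trusted solver (+ overlap cells)
    right = diffuse(u[n // 2 - overlap:])  # surrogate domain, whose input
                                           # includes the trusted overlap
    u = np.concatenate([left[: n // 2], right[overlap:]])
print(u.shape)                             # (40,): field keeps its size
```

Because the surrogate re-reads the trusted overlap every step, errors it introduces cannot accumulate at the interface, which mirrors the bounded-error argument in the abstract.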
Aerospace Engineering | Aerodynamics and Wind Energy
Advisors/Committee Members: Hulshoff, Steven (mentor), Delft University of Technology (degree granting institution).
Subjects/Keywords: Aerodynamics; CFD; Deep Learning; Latent Space; Autoencoder; Recurrent Neural Network
APA (6th Edition):
Mulder, B. (2019). Latent Space Modelling of Unsteady Flow Subdomains: Thesis Report. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:fbf93d6e-211f-4b92-a057-956d694db315
Chicago Manual of Style (16th Edition):
Mulder, Boris (author). “Latent Space Modelling of Unsteady Flow Subdomains: Thesis Report.” 2019. Masters Thesis, Delft University of Technology. Accessed March 07, 2021.
http://resolver.tudelft.nl/uuid:fbf93d6e-211f-4b92-a057-956d694db315.
MLA Handbook (7th Edition):
Mulder, Boris (author). “Latent Space Modelling of Unsteady Flow Subdomains: Thesis Report.” 2019. Web. 07 Mar 2021.
Vancouver:
Mulder B. Latent Space Modelling of Unsteady Flow Subdomains: Thesis Report. [Internet] [Masters thesis]. Delft University of Technology; 2019. [cited 2021 Mar 07].
Available from: http://resolver.tudelft.nl/uuid:fbf93d6e-211f-4b92-a057-956d694db315.
Council of Science Editors:
Mulder B. Latent Space Modelling of Unsteady Flow Subdomains: Thesis Report. [Masters Thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:fbf93d6e-211f-4b92-a057-956d694db315

Delft University of Technology
16.
Samad, Azlaan Mustafa (author).
Multi Agent Deep Recurrent Q-Learning for Different Traffic Demands.
Degree: 2020, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:84d20f53-3be7-4e85-8588-b92b962b32fe
► In today’s scenario due to rapid urbanisation there has been a shift of population from rural to urban areas especially in developing countries in search…
(more)
▼ In today's scenario, due to rapid urbanisation, there has been a shift of population from rural to urban areas, especially in developing countries, in search of better opportunities. This has led to unprecedented growth of cities and to various urbanisation problems. One of the main problems in urban areas is increased traffic congestion, which in turn causes pollution and health issues. With the current advancement in Artificial Intelligence, especially in the field of Deep Neural Networks, various attempts have been made to apply it to Traffic Light Control. This thesis takes forward the problem of reducing traffic congestion, and thereby total travel time. One contribution of this thesis is to study the performance of Deep Recurrent Q-Network models under different traffic demands or congestion scenarios. Another contribution is to apply different coordination algorithms together with Transfer Learning in Multi-Agent Systems (multiple traffic intersections) and study their behaviour. Lastly, the performance of these algorithms is also studied as the number of intersections increases.
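Underneath a Deep Recurrent Q-Network sits the plain Q-learning update; a tabular toy (two congestion phases, two signal actions, with the recurrent state estimation and neural function approximation omitted) shows that update rule. Everything here is an invented stand-in for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
Q = np.zeros((2, 2))                  # Q[phase, action]
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(s, a):
    return 1.0 if a == s else 0.0     # reward for serving the congested phase

s = 0
for _ in range(2000):
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
    r = reward(s, a)
    s2 = int(rng.integers(2))         # demand switches at random
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # Q-learning update
    s = s2
print(np.argmax(Q, axis=1))           # greedy policy after learning
```

A DRQN replaces the table with a network whose LSTM layer estimates the state from a history of partial observations, but the temporal-difference target in the update is the same.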
Advisors/Committee Members: Oliehoek, Frans (mentor), Vuik, Kees (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Deep Reinforcement Learning; Deep Q-Network; Recurrent Neural Networks
APA (6th Edition):
Samad, A. M. (2020). Multi Agent Deep Recurrent Q-Learning for Different Traffic Demands. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:84d20f53-3be7-4e85-8588-b92b962b32fe
Chicago Manual of Style (16th Edition):
Samad, Azlaan Mustafa (author). “Multi Agent Deep Recurrent Q-Learning for Different Traffic Demands.” 2020. Masters Thesis, Delft University of Technology. Accessed March 07, 2021.
http://resolver.tudelft.nl/uuid:84d20f53-3be7-4e85-8588-b92b962b32fe.
MLA Handbook (7th Edition):
Samad, Azlaan Mustafa (author). “Multi Agent Deep Recurrent Q-Learning for Different Traffic Demands.” 2020. Web. 07 Mar 2021.
Vancouver:
Samad AM. Multi Agent Deep Recurrent Q-Learning for Different Traffic Demands. [Internet] [Masters thesis]. Delft University of Technology; 2020. [cited 2021 Mar 07].
Available from: http://resolver.tudelft.nl/uuid:84d20f53-3be7-4e85-8588-b92b962b32fe.
Council of Science Editors:
Samad AM. Multi Agent Deep Recurrent Q-Learning for Different Traffic Demands. [Masters Thesis]. Delft University of Technology; 2020. Available from: http://resolver.tudelft.nl/uuid:84d20f53-3be7-4e85-8588-b92b962b32fe

University of Texas – Austin
17.
Zhong, Shijing.
A review on constrained recurrent sparse auto-encoder.
Degree: MS in Computational Science, Engineering, and Mathematics, Computational Science, Engineering, and Mathematics, 2020, University of Texas – Austin
URL: http://dx.doi.org/10.26153/tsw/10925
► Sparse Dictionary Learning generates a sparse representation for images and signals along with a generalized learned dictionary. We closely examine the constrained recurrent sparse…
(more)
▼ Sparse Dictionary Learning generates a sparse representation of images and signals along with a generalized learned dictionary. We closely examine the constrained recurrent sparse auto-encoder (CRsAE), focusing on its encoder-decoder-plus-recurrent architecture, and experimentally situate CRsAE within the classical dictionary learning problem. We further extend the visualizations, experiments, and metrics to evaluate the model in the context of both VAE and Dictionary Learning.
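CRsAE's recurrent encoder unrolls FISTA-style sparse coding iterations; a minimal FISTA solver for a toy sparse coding problem shows what one such unrolled iteration computes (dictionary size, sparsity, and the λ value are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(5)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 1.0]       # sparse ground truth
y = D @ x_true + 0.01 * rng.standard_normal(20)

lam = 0.05
L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of the gradient
x = np.zeros(50)
z = x.copy()
t = 1.0
for _ in range(200):
    g = D.T @ (D @ z - y)                    # gradient of 0.5 * ||D z - y||^2
    v = z - g / L
    x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    z = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum step
    x, t = x_new, t_new
support = np.flatnonzero(np.abs(x) > 0.1)
print(support)                               # recovers the planted sparse support
```

In the auto-encoder view, a fixed number of these iterations is the encoder, the reconstruction D @ x is the decoder, and D itself is learned by backpropagating through the unrolled loop.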
Advisors/Committee Members: Bajaj, Chandrajit (advisor).
Subjects/Keywords: Convolutional dictionary learning; FISTA; Recurrent neural network; Encoder-decoder
APA (6th Edition):
Zhong, S. (2020). A review on constrained recurrent sparse auto-encoder. (Masters Thesis). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/10925
Chicago Manual of Style (16th Edition):
Zhong, Shijing. “A review on constrained recurrent sparse auto-encoder.” 2020. Masters Thesis, University of Texas – Austin. Accessed March 07, 2021.
http://dx.doi.org/10.26153/tsw/10925.
MLA Handbook (7th Edition):
Zhong, Shijing. “A review on constrained recurrent sparse auto-encoder.” 2020. Web. 07 Mar 2021.
Vancouver:
Zhong S. A review on constrained recurrent sparse auto-encoder. [Internet] [Masters thesis]. University of Texas – Austin; 2020. [cited 2021 Mar 07].
Available from: http://dx.doi.org/10.26153/tsw/10925.
Council of Science Editors:
Zhong S. A review on constrained recurrent sparse auto-encoder. [Masters Thesis]. University of Texas – Austin; 2020. Available from: http://dx.doi.org/10.26153/tsw/10925

University of New Mexico
18.
Goudarzi, Alireza.
Theory and Practice of Computing with Excitable Dynamics.
Degree: Department of Computer Science, 2016, University of New Mexico
URL: https://digitalrepository.unm.edu/cs_etds/81
▼ Reservoir computing (RC) is a promising paradigm for time series processing. In this paradigm, the desired output is computed by combining measurements of an excitable system that responds to time-dependent exogenous stimuli. The excitable system is called a reservoir, and measurements of its state are combined using a readout layer to produce a target output. The power of RC is attributed to an emergent short-term memory in dynamical systems and has been analyzed mathematically for both linear and nonlinear dynamical systems. The theory of RC treats only the macroscopic properties of the reservoir, without reference to the underlying medium it is made of. As a result, RC is particularly attractive for building computational devices using emerging technologies whose structure is not exactly controllable, such as self-assembled nanoscale circuits. RC has lacked a formal framework for performance analysis and prediction that goes beyond memory properties. To provide such a framework, here a mathematical theory of memory and information processing in ordered and disordered linear dynamical systems is developed. This theory analyzes the optimal readout layer for a given task. The focus of the theory is a standard model of RC, the echo state network (ESN). An ESN consists of a fixed recurrent neural network that is driven by an external signal. The dynamics of the network are then combined linearly with readout weights to produce the desired output. The readout weights are calculated using linear regression.
Using an analysis of regression equations, the readout weights can be calculated using only the statistical properties of the reservoir dynamics, the input signal, and the desired output. The readout layer weights can be calculated from a priori knowledge of the desired function to be computed and the weight matrix of the reservoir. This formulation explicitly depends on the input weights, the reservoir weights, and the statistics of the target function. This formulation is used to bound the expected error of the system for a given target function. The effects of input-output correlation and complex network structure in the reservoir on the computational performance of the system have been mathematically characterized. Far from the chaotic regime, ordered linear networks exhibit a homogeneous decay of memory in different dimensions, which keeps the input history coherent. As disorder is introduced in the structure of the network, memory decay becomes inhomogeneous along different dimensions, causing decoherence in the input history and degradation in task-solving performance. Close to the chaotic regime, the ordered systems show loss of temporal information in the input history, and therefore an inability to solve tasks. However, by introducing disorder, and therefore heterogeneous decay of memory, the temporal information of the input history is preserved and the task-solving performance is recovered. Thus, for systems at the edge of chaos, disordered structure may enhance temporal information processing. Although the…
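The ESN pipeline described above — a fixed random recurrent network driven by an input, with only a linear readout fitted by regression — can be sketched in a few lines. The reservoir size, spectral radius, and the delay-recall task below are illustrative choices, not the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir and input weights (never trained)
n_res = 100
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1
W_in = 0.5 * rng.normal(size=(n_res, 1))

def run_reservoir(u):
    """Drive the reservoir with scalar input sequence u, collect states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Task probing short-term memory: reproduce the input delayed by 3 steps
u = rng.uniform(-1, 1, 500)
X = run_reservoir(u)
y = np.roll(u, 3)
washout = 50                              # discard initial transient
A, b = X[washout:], y[washout:]

# Readout weights by (ridge-regularized) linear regression, as in the ESN formulation
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ b)
pred = A @ w_out
```

Only `w_out` depends on the task; the same collected states `X` could be reused with a different regression target, which is why the thesis can analyze the optimal readout from the statistics of the reservoir dynamics alone.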
Advisors/Committee Members: Darko Stefanovic, Christof Teuscher, Lance Williams, Melanie Moses.
Subjects/Keywords: reservoir computing; recurrent neural network; excitable dynamics; dynamical systems; Computer Sciences
APA (6th Edition):
Goudarzi, A. (2016). Theory and Practice of Computing with Excitable Dynamics. (Doctoral Dissertation). University of New Mexico. Retrieved from https://digitalrepository.unm.edu/cs_etds/81

University of Waterloo
19.
Ruvinov, Igor.
Recurrent Neural Network Dual Resistance Control of Multiple Memory Shape Memory Alloys.
Degree: 2018, University of Waterloo
URL: http://hdl.handle.net/10012/13647
▼ Shape memory alloys (SMAs) are materials with extraordinary thermomechanical properties which have enabled numerous engineering advances. NiTi SMAs in particular have been studied for decades, revealing many useful characteristics relative to other SMA compositions. Their application has correspondingly been widespread, seeing use in the robotics, automotive, and aerospace industries, among others. Nevertheless, SMAs have several inherent limitations which inhibit their applicability, including their single transformation temperature and their complex hysteretic actuation behaviour.
To overcome the former challenge, one method utilizes high-energy laser processing to perform localized vaporization of nickel and accurately adjust the alloy's transformation temperatures. This method can reliably produce NiTi SMAs with multiple monolithic transformation memories. There have also been attempts to overcome the latter of the aforementioned challenges by designing systems which model NiTi's hysteretic behaviour. When applied to actuators with a single transformation memory, these methods require external sensors for modeling actuators with varying current and load, driving up the cost, weight, and complexity of the actuator. Embedding a second transformation memory with a different phase into NiTi actuators can overcome this issue. By measuring electrical resistance across the two phases, sufficient information can be extracted to differentiate events caused by heating from those caused by applied load. The current study examines NiTi wires with two embedded transformation memories and utilizes recurrent neural networks to interpret the sensed data. The knowledge gained through this study was used to create a recurrent neural network-based model which can accurately estimate the position of, and force applied to, the NiTi actuator without the use of external sensors.
The first part of the research focused on obtaining a comprehensive thermomechanical characterization of laser-processed and thermomechanically post-processed NiTi wires with two embedded transformation memories, with one memory exhibiting the full shape memory effect (SME) and the second partial pseudoelasticity (PE) at room temperature. A second objective of this section was to acquire cycling data from the processed wires which would be used for training the artificial neural networks in the following section of the study. The selected laser processing and post-processing parameters resulted in transformation temperature increases of 61.5°C and 35.3°C for Af and Ms, respectively, relative to the base metal. Furthermore, the post-processing was found to successfully restore the majority of the lost mechanical properties, with the ultimate tensile strength recovered to 84% of its corresponding base metal value. This research resulted in the fabrication of NiTi wires with two distinct embedded transformation memories, exhibiting sufficient mechanical and cyclic properties for the next phase of the research.
Once an acceptable amount of NiTi actuation cycling data was acquired, the second…
Subjects/Keywords: Recurrent neural network; Shape memory alloys; Artificial intelligence
APA (6th Edition):
Ruvinov, I. (2018). Recurrent Neural Network Dual Resistance Control of Multiple Memory Shape Memory Alloys. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/13647
20.
Herzfeld, David James.
Modeling and Computational Framework for the Specification and Simulation of Large-scale Spiking Neural Networks.
Degree: 2011, Marquette University
URL: https://epublications.marquette.edu/theses_open/102
▼ Recurrently connected neural networks, in which synaptic connections between neurons can form directed cycles, have been used extensively in the literature to describe various neurophysiological phenomena, such as coordinate transformations during sensorimotor integration. Due to the directed cycles that can exist in recurrent networks, there is no well-known way to a priori specify synaptic weights to elicit neuron spiking responses to stimuli based on available neurophysiology. Using a common mean-field assumption, that synaptic inputs are uncorrelated for sufficiently large populations of neurons, we show that the connection topology and a neuron's response characteristics can be decoupled. This assumption allows specification of neuron steady-state responses independent of the connection topology.
Specification of neuron responses necessitates the creation of a novel simulator (computational framework) which allows modeling of large populations of connected spiking neurons. We describe the implementation of a spike-based computational framework, designed to take advantage of high-performance computing architectures when available. We show that performance of the computational framework is improved using multiple message-passing processes for large populations of neurons, resulting in a worst-case linear relationship between the number of neurons and the time required to complete a simulation.
Using the computational framework and the ability to specify neuron response characteristics independent of synaptic weights, we systematically investigate the effects of Hebbian learning on the hemodynamic response. Changes in the magnitude of the hemodynamic responses of neural populations are assessed using a forward model that relates population synaptic currents to the blood oxygen level dependent (BOLD) response via local field potentials. We show that the magnitude of the hemodynamic response is not an accurate indicator of underlying spiking activity for all network topologies. Instead, we note that large changes in the aggregate response of the population (>50%) can result in a decrease in the overall magnitude of the BOLD signal. We hypothesize that the hemodynamic response magnitude changed due to fluctuations in the balance of excitatory and inhibitory inputs in neural subpopulations. These results have important implications for mean-field models, suggesting that the underlying excitatory/inhibitory neural dynamics within a population may need to be taken into account to accurately predict hemodynamic responses.
Advisors/Committee Members: Beardsley, Scott, Scheidt, Robert A., Struble, Craig A..
Subjects/Keywords: hemodynamic; neural network; recurrent; spiking; Biomedical Engineering and Bioengineering
APA (6th Edition):
Herzfeld, D. J. (2011). Modeling and Computational Framework for the Specification and Simulation of Large-scale Spiking Neural Networks. (Thesis). Marquette University. Retrieved from https://epublications.marquette.edu/theses_open/102

Texas A&M University
21.
Wang, Han.
Dynamic Analysis of Recurrent Neural Networks.
Degree: PhD, Computer Science, 2020, Texas A&M University
URL: http://hdl.handle.net/1969.1/191907
▼ With the advancement of deep learning research, neural networks have become one of the most powerful tools for artificial intelligence tasks. More specifically, recurrent neural networks (RNNs) have achieved state-of-the-art performance in tasks such as handwriting recognition and speech recognition. Despite the success of recurrent neural networks, how and why neural nets work is still not sufficiently investigated. My work on the dynamical analysis of recurrent neural networks can help in understanding how input features are extracted in the recurrent layer, how RNNs make decisions, and how the chaotic dynamics of RNNs affect their behavior. Firstly, I investigated the dynamics of recurrent neural networks as autonomous dynamical systems in the experiment of a two-joint limb control task, and compared the empirical results with the theoretical analysis. Secondly, I investigated the dynamics of non-autonomous recurrent neural networks on two benchmark tasks: the sequential MNIST recognition task and a DNA splice-junction classification task. How the hidden states of long short-term memory (LSTM) and gated recurrent unit (GRU) cells learn new features, and how the input sequence is extracted, are demonstrated with experiments. Finally, based on this understanding of the external and internal dynamics of recurrent units, I proposed several algorithms for recurrent neural network compression. The algorithms demonstrate reasonable compression ratios and are able to sustain the performance of the original models.
Advisors/Committee Members: Choe, Yoonsuck (advisor), Chaspari, Theodora (advisor), Hammond, Tracy (committee member), Lu, Mi (committee member).
Subjects/Keywords: Recurrent Neural Network; machine learning; deep learning; dynamical system
APA (6th Edition):
Wang, H. (2020). Dynamic Analysis of Recurrent Neural Networks. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/191907
22.
Ellis, Robert.
Leveraging local and global word context for multi-label document classification.
Degree: 2020, Athabasca University
URL: http://hdl.handle.net/10791/334
▼ With the increasing volume of text documents, it is crucial to identify the themes and topics contained within. Labelling documents with the identified topics is called multi-label classification. Interdependencies exist between not just words, but sentences and paragraphs. These longer sequences and more complex relationships increase the label identification challenge. Five novel deep neural networks are proposed and evaluated for their performance in classifying longer documents. The RCLNN applies the RCL to NLP, combining that model with a CNN, which has demonstrated success on short text. The QRCNN similarly extends a CNN, in addition to implementing it with a QRNN. The remaining three models build on these base models, integrating them in a novel pseudo-Siamese approach. Experiments find the QRCNN highest performing overall, with the PSRCNNA model a close second, indicating that the pseudo-Siamese approach can be effective when combined with attention.
Advisors/Committee Members: Dewan, Ali (Faculty of Science and Technology, School of Computing and Information Systems), Bagheri, Ebrahim (Ryerson University), Wen, Dunwei (Faculty of Science and Technology, School of Computing and Information Systems).
Subjects/Keywords: Recurrent; Convolutional; Neural network; Classification; Attention; Hierarchy; Ensemble; Siamese
APA (6th Edition):
Ellis, R. (2020). Leveraging local and global word context for multi-label document classification. (Thesis). Athabasca University. Retrieved from http://hdl.handle.net/10791/334
23.
Lundström, Oscar.
Learning a Better Attitude: A Recurrent Neural Filter for Orientation Estimation.
Degree: Chalmers tekniska högskola / Institutionen för mekanik och maritima vetenskaper, 2020, Chalmers University of Technology
URL: http://hdl.handle.net/20.500.12380/300918
▼ In the current paradigm of sensor fusion, orientation estimation from inertial measurement unit (IMU) sensor data is done using techniques derived with Bayesian statistics. These derivations are based on assumptions about noise distributions and hand-crafted equations describing the relevant system dynamics. Machine learning, and more specifically neural networks, may provide an alternative solution to the problem of orientation estimation in which no such assumptions or hand-crafted relationships are present. This thesis aims to investigate whether a neural network-based filter can achieve a performance comparable to, or exceeding that of, the more conventional extended Kalman filter. Two network architectures based on long short-term memory layers are proposed, trained, evaluated, and compared using data from the Oxford inertial odometry dataset. Of the two suggested model architectures, the so-called recurrent neural filter is found to give the better performance. The recurrent neural filter has a structure inspired by Bayesian filtering, with a prediction and an update step, allowing it to output a prediction in the event of missing data. Further, the evaluated models are trained to estimate orientation as well as a parameterized error covariance matrix. Our results show that the suggested recurrent neural filter outperforms the benchmark filter both in average root mean square error and in execution time. The results also indicate that the machine learning-based approach to sensor fusion problems may be an attractive alternative to hand-crafted filters in the future.
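As a rough sketch of the prediction/update idea, the following toy LSTM-based filter propagates its recurrent state with zero input when a measurement is missing, so it still emits a (prediction-only) orientation estimate. The dimensions, random untrained weights, and quaternion output head are purely illustrative, not the thesis's trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 6-D IMU sample (gyro + accel), 32-D hidden state,
# 4-D output interpreted as an orientation quaternion.
n_in, n_h, n_out = 6, 32, 4

# Random stand-ins for trained LSTM parameters (gates stacked: i, f, g, o)
Wx = rng.normal(scale=0.1, size=(4 * n_h, n_in))
Wh = rng.normal(scale=0.1, size=(4 * n_h, n_h))
b = np.zeros(4 * n_h)
W_out = rng.normal(scale=0.1, size=(n_out, n_h))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    """One LSTM step: gates computed from input x and hidden state h."""
    z = Wx @ x + Wh @ h + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def filter_sequence(imu):
    """Emit one unit quaternion per IMU sample; NaN rows mark missing
    measurements and trigger a prediction-only step with zero input."""
    h, c = np.zeros(n_h), np.zeros(n_h)
    quats = []
    for x in imu:
        if np.isnan(x).any():
            x = np.zeros(n_in)           # update skipped: propagate state only
        h, c = lstm_step(x, h, c)
        q = W_out @ h
        quats.append(q / (np.linalg.norm(q) + 1e-12))  # normalise to unit length
    return np.array(quats)
```

A trained version would add a second head for the parameterized error covariance the thesis mentions; the control flow for missing data would be unchanged.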
Subjects/Keywords: sensor-fusion; state estimation; absolute orientation estimation; recurrent neural filter; recurrent neural network; RNN; LSTM; IMU; MARG
APA (6th Edition):
Lundström, O. (2020). Learning a Better Attitude: A Recurrent Neural Filter for Orientation Estimation. (Thesis). Chalmers University of Technology. Retrieved from http://hdl.handle.net/20.500.12380/300918
24.
Oskarsson, Gustav.
Aktieprediktion med neurala nätverk : En jämförelse av statistiska modeller, neurala nätverk och kombinerade neurala nätverk.
Degree: 2019, Department of Industrial Economics
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18214
▼ This study is about prediction of the stock market through a comparison of neural networks and statistical models. The study aims to improve the accuracy of stock prediction. Much of the research on predicting shares deals with statistical models, but also with neural networks, mainly of the RNN and CNN types. No research has been done on how these neural networks can be combined, which is what this study addresses. Tests are made on statistical models, neural networks, and combined neural networks to predict stocks at the minute level. The results show that a combination of two neural networks of type RNN gives the best accuracy in the prediction of shares. The accuracy of the predictions increases further if these combined neural networks are trained to predict different time horizons. In addition to tests of accuracy, simulations have also been made which likewise confirm that there is some possibility of predicting shares. Two combined RNNs gave the best results, but in the simulations even the CNN made good predictions. One conclusion is that the stock market is not entirely efficient, as some opportunity to predict future values exists. Another conclusion is that neural networks are better than statistical models at predicting stocks if the neural networks are combined and are of type RNN.
Subjects/Keywords: neural network; stock market; recurrent neural network; Other Engineering and Technologies not elsewhere specified
APA (6th Edition):
Oskarsson, G. (2019). Aktieprediktion med neurala nätverk : En jämförelse av statistiska modeller, neurala nätverk och kombinerade neurala nätverk. (Thesis). , Department of Industrial Economics. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18214

Oklahoma State University
25.
Phan, Manh Cong.
Recurrent Neural Networks: Error Surface Analysis and Improved Training.
Degree: Electrical Engineering, 2014, Oklahoma State University
URL: http://hdl.handle.net/11244/15063
Recurrent neural networks (RNNs) have powerful computational abilities and can be used in a variety of applications; however, training these networks remains difficult. One reason RNN training is hard, especially with batch, gradient-based methods, is the existence of spurious valleys in the error surface. In this work, a mathematical framework was developed to analyze the spurious valleys that appear in most practical RNN architectures, regardless of their size. The insights gained from this analysis suggested a new procedure for improving the training process. The procedure uses a batch training method based on a modified version of the Levenberg-Marquardt algorithm and mitigates the effects of spurious valleys in the error surface of RNNs. On a variety of test problems, the new procedure is consistently better than existing training algorithms (both batch and stochastic) for training RNNs. In addition, a framework for neural network controllers based on the model reference adaptive control (MRAC) architecture was developed. This architecture has been used before, but the difficulty of training RNNs has limited its use; the new training procedures make MRAC more attractive. The updated MRAC framework is flexible and incorporates disturbance rejection, regulation, and tracking. Simulation and testing results on several real systems show that this type of neural control is well suited for highly nonlinear plants.
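The Levenberg-Marquardt algorithm named in the abstract blends Gauss-Newton and gradient-descent steps through an adaptive damping term. As a rough sketch only (a generic LM loop on a toy curve fit, not the thesis's modified RNN variant; `lm_fit` and the exponential model are illustrative assumptions):

```python
import numpy as np

# Minimal Levenberg-Marquardt loop: solve (J^T J + mu I) dw = -J^T e,
# accept the step if the sum of squared residuals decreases, and adapt
# the damping factor mu accordingly.
def lm_fit(f, jac, w, x, y, mu=1e-2, iters=50):
    for _ in range(iters):
        e = f(w, x) - y                      # residual vector
        J = jac(w, x)                        # Jacobian of residuals w.r.t. w
        dw = np.linalg.solve(J.T @ J + mu * np.eye(len(w)), -J.T @ e)
        w_new = w + dw
        if np.sum((f(w_new, x) - y) ** 2) < np.sum(e ** 2):
            w, mu = w_new, mu * 0.5          # accept step, trust the model more
        else:
            mu *= 2.0                        # reject step, damp harder
    return w

# Toy model y = w0 * exp(w1 * x), fit to noiseless data
f = lambda w, x: w[0] * np.exp(w[1] * x)
jac = lambda w, x: np.stack([np.exp(w[1] * x),
                             w[0] * x * np.exp(w[1] * x)], axis=1)

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
w = lm_fit(f, jac, np.array([1.0, 0.0]), x, y)   # converges near [2.0, -1.5]
```

The thesis's contribution lies in how such a batch method is modified to cope with spurious valleys in RNN error surfaces, which this sketch does not attempt to reproduce.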
Advisors/Committee Members: Hagan, Martin T. (advisor), Latino, Carl (committee member), Hutchens, Chris (committee member), Kable, Anthony (committee member).
Subjects/Keywords: error surface; neural control; recurrent neural network; spurious valley; system identification; training
APA (6th Edition):
Phan, M. C. (2014). Recurrent Neural Networks: Error Surface Analysis and Improved Training. (Thesis). Oklahoma State University. Retrieved from http://hdl.handle.net/11244/15063

Louisiana State University
26.
Firth, Robert James.
A Novel Recurrent Convolutional Neural Network for Ocean and Weather Forecasting.
Degree: PhD, Computer Sciences, 2016, Louisiana State University
URL: etd-04112016-151259 ; https://digitalcommons.lsu.edu/gradschool_dissertations/2099
Numerical weather prediction is a computationally expensive task that requires not only the numerical solution of a complex set of nonlinear partial differential equations, but also a parameterization scheme to estimate sub-grid-scale phenomena. The proposed method is an alternative approach to developing a mesoscale meteorological model: a modified recurrent convolutional neural network that learns to simulate the solution to these equations. Combined with an appropriate time-integration scheme and learning algorithm, the method can produce multi-day forecasts for a large region. The learning method presented is an extended form of Backpropagation Through Time for a recurrent network whose outputs feed back as inputs only after undergoing a fixed transformation. An initial implementation forecasts for 2,744 locations across the southeastern United States at 36 vertical levels of the atmosphere, and 119,000 locations across the Atlantic Ocean at 39 vertical levels. These models, called LM3 and LOM, forecast wind speed, temperature, geopotential height, and rainfall for weather forecasting, and water current speed, temperature, and salinity for ocean forecasting. Experimental results show that the new approach is 3.6 times more efficient at forecasting the ocean and 16 times more efficient at forecasting the atmosphere. The approach showed forecast skill by beating the accuracy of two baselines, persistence and climatology, and was more accurate than the Navy NCOM model on 16 of the first 17 ocean layers below the surface (2 to 70 meters) for salinity and on 15 of the first 17 layers for temperature. It was also more accurate than the RAP model at forecasting wind speed on 7 layers, specific humidity on 7 layers, relative humidity on 6 layers, and temperature on 3 layers, with competitive results elsewhere.
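The distinctive structural feature described above is a recurrent loop closed through a fixed transformation: the network's output becomes the next input only after passing through an untrained map. A toy sketch of that rollout structure (random weights and sizes are placeholders, not the thesis's LM3/LOM models):

```python
import numpy as np

# Recurrent rollout where a fixed, untrained transformation T sits on the
# feedback path -- the structure the extended BPTT in the abstract targets.
rng = np.random.default_rng(0)
n = 4
W = rng.normal(size=(n, n)) * 0.5         # trainable weights (random here)
T = 0.5 * np.eye(n)                       # fixed feedback transformation

def rollout(x0, steps):
    x, outputs = x0, []
    for _ in range(steps):
        y = np.tanh(W @ x)                # network output at this time step
        outputs.append(y)
        x = T @ y                         # fixed transform closes the loop
    return np.stack(outputs)

outs = rollout(np.ones(n), steps=3)       # -> array of shape (3, 4)
```

Because T is fixed, backpropagating through time only has to carry gradients through T as a constant linear map, which is what makes the extended BPTT derivation tractable.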
Subjects/Keywords: time series; spatial; temporal; time step network; convolutional; recurrent; neural network; weather forecasting; ocean forecasting
APA (6th Edition):
Firth, R. J. (2016). A Novel Recurrent Convolutional Neural Network for Ocean and Weather Forecasting. (Doctoral Dissertation). Louisiana State University. Retrieved from etd-04112016-151259 ; https://digitalcommons.lsu.edu/gradschool_dissertations/2099

The Ohio State University
27.
Zheng, Yilin.
Text-Based Speech Video Synthesis from a Single Face Image.
Degree: MS, Electrical and Computer Engineering, 2019, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1572168353691788
Speech video synthesis is the task of generating talking characters that look realistic to human evaluators. Most previous studies used animation models and required speech audio as input. Recent advances in Generative Adversarial Networks (GANs) have made the modification of real-person images possible. Instead of using audio signals, which already carry temporal information, this work uses text directly. The system has three modules. First, an encoder-decoder Recurrent Neural Network (RNN) with a new training scheme maps the text to Action Unit (AU) activation intensities and 3D head movements. Second, a conditional GAN synthesizes new images whose facial configuration corresponds to the AU activations. Third, 3D-rotated images with the corresponding head movements are generated to improve the visualization.
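The first module is an encoder-decoder RNN from a token sequence to per-frame AU intensities. A shape-level sketch only (the vocabulary size, hidden width, AU count, and random weights below are illustrative assumptions, not the trained model):

```python
import numpy as np

# Toy encoder-decoder RNN: the encoder consumes text tokens into a hidden
# state; the decoder unrolls that state into per-frame AU intensities.
rng = np.random.default_rng(1)
vocab, hid, n_aus = 30, 16, 17            # sizes are placeholders

E = rng.normal(size=(vocab, hid)) * 0.1   # token embedding table
W_enc = rng.normal(size=(hid, hid)) * 0.1
W_dec = rng.normal(size=(hid, hid)) * 0.1
W_out = rng.normal(size=(hid, n_aus)) * 0.1

def text_to_au(token_ids, out_steps):
    h = np.zeros(hid)
    for t in token_ids:                   # encoder: read the text
        h = np.tanh(W_enc @ h + E[t])
    frames = []
    for _ in range(out_steps):            # decoder: one AU vector per frame
        h = np.tanh(W_dec @ h)
        frames.append(1.0 / (1.0 + np.exp(-(W_out.T @ h))))  # in (0, 1)
    return np.stack(frames)

frames = text_to_au([3, 7, 7, 2], out_steps=5)   # -> shape (5, 17)
```

The real system additionally predicts 3D head movements and feeds the AU vectors to a conditional GAN, which this sketch omits.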
Advisors/Committee Members: Martinez, Aleix (Advisor).
Subjects/Keywords: Computer Science; Computer Engineering; Face Image Synthesis, Generative Adversarial Network,
Recurrent Neural Network
APA (6th Edition):
Zheng, Y. (2019). Text-Based Speech Video Synthesis from a Single Face Image. (Masters Thesis). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1572168353691788

Delft University of Technology
28.
Voss, Sander.
Application of Deep Learning for Spacecraft Fault Detection and Isolation.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:7c308a4b-f97b-4a83-b739-4019ad306853
Spacecraft require high availability, autonomous operation, and a high degree of mission success. They use sensors, such as star trackers and GPS receivers, and actuators, such as reaction wheels, to reach and maintain a correct attitude and position. Failures in these components have a significant negative impact on mission success and may even cause total loss of the mission. Fault Detection, Isolation and Recovery (FDIR) aims to detect and isolate faults and recover from them before they develop into failures, which makes it an important factor in the success of a satellite's mission; it is also a determining factor in the level of autonomy, if a system does not require ground intervention to perform FDIR. Developing FDIR methods is difficult: success depends largely on knowledge of the system, and performance suffers in noisy environments. This research explores the use of deep learning for fault detection and isolation in spacecraft. In a case study, the proposed method is used to detect and isolate reaction wheel, GPS, star tracker, and magnetometer faults, as well as two simultaneous faults. The results suggest successful classification of faults in the star trackers, GPS, and magnetometers, but poor performance on misalignment faults, and tachometer faults are often not isolated to the correct wheel. There is a high rate of false alarms and missed detections; preliminary results suggest that separating detection from isolation may resolve this. Dataset size was also shown to be a large contributor to the accuracy and, above all, the loss performance of the network.
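The keywords point to long short-term memory (LSTM) networks over sensor time series. As a generic illustration of the building block (random weights over assumed channel counts, not the thesis's trained fault classifier):

```python
import numpy as np

# One LSTM cell, unrolled over a window of sensor readings. The four gates
# (input i, forget f, output o, candidate g) are computed in a single
# matrix product and then split.
rng = np.random.default_rng(2)
n_in, n_hid = 6, 8                        # e.g. 6 sensor channels (assumed)
W = rng.normal(size=(4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    c = sig(f) * c + sig(i) * np.tanh(g)  # cell-state update
    h = sig(o) * np.tanh(c)               # hidden state / output
    return h, c

h = c = np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):     # run over a 10-step sensor window
    h, c = lstm_step(x, h, c)
```

In a fault-detection setting, the final hidden state `h` would typically feed a classification head over fault classes; that head, and all training, is omitted here.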
Aerospace Engineering
Advisors/Committee Members: Guo, Jian (mentor), Delft University of Technology (degree granting institution).
Subjects/Keywords: FDI; FDIR; Fault Detection; Fault Isolation; Deep Learning; Recurrent networks; Recurrent Neural Network; long short-term memory networks; LSTM
APA (6th Edition):
Voss, S. (2019). Application of Deep Learning for Spacecraft Fault Detection and Isolation. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:7c308a4b-f97b-4a83-b739-4019ad306853

Brno University of Technology
29.
Huf, Petr.
Machine Learning Strategies in Electronic Trading: Machine Learning Strategies in Electronic Trading.
Degree: 2019, Brno University of Technology
URL: http://hdl.handle.net/11012/56492
Successful stock trading is a dream of many people, and electronic trading is an interesting branch of this business: the trading strategy runs on a computer continuously, without any human intervention, offering free time and potentially high earnings. This thesis examines the use of neural networks in building this type of trading strategy. An existing recurrent neural network was used as a basis and modified for the needs of trading. The result is a neural network that predicts future market moves; a trading strategy based on this network is able to trade successfully.
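A strategy of this shape separates into a predictor of the next move and a rule that turns predictions into positions. A hedged sketch of the surrounding loop (the `backtest` harness and the naive momentum stand-in predictor are illustrative assumptions; the thesis's recurrent network would replace `predict_next`):

```python
# Go long when the predictor expects the price to rise, hold cash otherwise.
def backtest(prices, predict_next):
    cash, position = 1.0, 0.0
    for t in range(1, len(prices)):
        history = prices[:t]
        if predict_next(history) > prices[t - 1]:    # expect a rise: hold asset
            position, cash = position + cash / prices[t - 1], 0.0
        else:                                        # expect a fall: hold cash
            cash, position = cash + position * prices[t - 1], 0.0
    return cash + position * prices[-1]              # final portfolio value

# Naive momentum stand-in for the recurrent predictor
last_move = lambda h: h[-1] + (h[-1] - h[-2] if len(h) > 1 else 0)

value = backtest([10, 11, 12, 11, 12, 13], last_move)
```

On this toy series the momentum rule ends with 13/12 of the starting capital; with a learned recurrent predictor, the same harness measures whether the network's forecasts translate into profit.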
Advisors/Committee Members: Černocký, Jan (advisor), Kolář, Martin (referee).
Subjects/Keywords: neuronová síť; rekurentní neuronová síť; burza; obchodování; trh; profit; model; neural network; recurrent neural network; stock exchange; trading; market; profit; model
APA (6th Edition):
Huf, P. (2019). Machine Learning Strategies in Electronic Trading: Machine Learning Strategies in Electronic Trading. (Thesis). Brno University of Technology. Retrieved from http://hdl.handle.net/11012/56492
30.
이, 유림.
Chronic Kidney Disease Risk Prediction using Electronic Health Records Pattern Information based on Deep Learning.
Degree: 2019, Ajou University
URL: http://repository.ajou.ac.kr/handle/201003/17869 ; http://dcoll.ajou.ac.kr:9080/dcollection/jsp/common/DcLoOrgPer.jsp?sItemId=000000028811
According to the National Health Insurance Service (NHIS) in 2013, 3.9% of adults aged 30 years or older have chronic kidney disease, rising to 16.5% among those over 65. Renal insufficiency causes many complications as the disease progresses, and many patients die as these complications worsen. Nevertheless, the initial symptoms are not obvious, so patients rarely notice a kidney abnormality themselves and seek hospital care. Studies using electronic health records (EHR) have been conducted to detect chronic kidney disease early. As deep learning has developed rapidly, it has been actively researched and applied in the medical field and has shown good performance. Various experiments have also modified deep learning model structures, and performance has been shown to vary with structure; however, relatively few studies have compared different deep learning architectures for predicting chronic kidney disease. In this paper, we evaluated the risk of chronic kidney disease using diagnosis and prescription information from EHR data and compared the performance of deep learning models with several structures. We extracted the weights of the learned model and identified the time points with high weight in predicting chronic kidney disease. On the National Health Insurance Service sample DB, the model achieved 81.09% accuracy, 87.75% area under the Receiver Operating Characteristic curve (AUROC), 52.72% area under the Precision-Recall curve (AUPRC), and 83.03% weighted F1-score. On the Ajou University Hospital database, it achieved 82.07% accuracy, 88.24% AUROC, 63.61% AUPRC, and 82.93% weighted F1-score.
Based on these results, the proposed model is expected to contribute effectively to the early detection, delay, and reduced prevalence of chronic kidney disease.
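The keywords list attention over recurrent representations, and the abstract describes inspecting learned weights to find influential time points. A sketch of that inspection step under assumed shapes (random vectors stand in for RNN-encoded visits; this is not the paper's model):

```python
import numpy as np

# Softmax attention over a sequence of encoded EHR visits: one weight per
# visit, a weighted summary vector, and the most-attended time point.
rng = np.random.default_rng(3)
visits, hid = 12, 16                       # toy sequence length and width
H = rng.normal(size=(visits, hid))         # visit representations (placeholder)
v = rng.normal(size=hid)                   # attention query vector

scores = H @ v
alpha = np.exp(scores - scores.max())      # numerically stable softmax
alpha /= alpha.sum()                       # attention weight per visit
context = alpha @ H                        # weighted summary for prediction
top_visit = int(alpha.argmax())            # time point the model attends to most
```

Reading off `alpha` is what allows the qualitative evaluation described above: visits with large attention weight are the ones the model treats as most informative for the chronic kidney disease prediction.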
Advisors/Committee Members: 대학원 의학과, 201724667, 이, 유림.
Subjects/Keywords: Electronic Health Records; Convolutional Neural Network; Recurrent Neural Network; Embedding; Attention; 전자의무기록; 순환 신경망; 합성곱 신경망; 임베딩 기법; 어텐션 메커니즘
APA (6th Edition):
이, 유림. (2019). Chronic Kidney Disease Risk Prediction using Electronic Health Records Pattern Information based on Deep Learning. (Thesis). Ajou University. Retrieved from http://repository.ajou.ac.kr/handle/201003/17869 ; http://dcoll.ajou.ac.kr:9080/dcollection/jsp/common/DcLoOrgPer.jsp?sItemId=000000028811