
You searched for subject:(Adversarial robustness). Showing records 1 – 9 of 9 total matches.

No search limiters apply to these results.



University of Waterloo

1. Wu, Kaiwen. Wasserstein Adversarial Robustness.

Degree: 2020, University of Waterloo

Deep models, while being extremely flexible and accurate, are surprisingly vulnerable to "small, imperceptible" perturbations known as adversarial attacks. While the majority of existing attacks…

Subjects/Keywords: Wasserstein distance; adversarial robustness; optimization



APA (6th Edition):

Wu, K. (2020). Wasserstein Adversarial Robustness. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/16345

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Wu, Kaiwen. “Wasserstein Adversarial Robustness.” 2020. Thesis, University of Waterloo. Accessed March 08, 2021. http://hdl.handle.net/10012/16345.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Wu, Kaiwen. “Wasserstein Adversarial Robustness.” 2020. Web. 08 Mar 2021.

Vancouver:

Wu K. Wasserstein Adversarial Robustness. [Internet] [Thesis]. University of Waterloo; 2020. [cited 2021 Mar 08]. Available from: http://hdl.handle.net/10012/16345.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Wu K. Wasserstein Adversarial Robustness. [Thesis]. University of Waterloo; 2020. Available from: http://hdl.handle.net/10012/16345

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Utah State University

2. Smith, Joshua. Formal Verification of the Adversarial Robustness Property of Deep Neural Networks Through Dimension Reduction Heuristics, Refutation-based Abstraction, and Partitioning.

Degree: MS, Electrical and Computer Engineering, 2020, Utah State University

Neural networks are tools that are often used to perform functions such as object recognition in images, speech-to-text, and general data classification. Because neural…

Subjects/Keywords: neural networks; formal verification; adversarial robustness; abstraction; adversarial example generation; Electrical and Computer Engineering



APA (6th Edition):

Smith, J. (2020). Formal Verification of the Adversarial Robustness Property of Deep Neural Networks Through Dimension Reduction Heuristics, Refutation-based Abstraction, and Partitioning. (Masters Thesis). Utah State University. Retrieved from https://digitalcommons.usu.edu/etd/7934

Chicago Manual of Style (16th Edition):

Smith, Joshua. “Formal Verification of the Adversarial Robustness Property of Deep Neural Networks Through Dimension Reduction Heuristics, Refutation-based Abstraction, and Partitioning.” 2020. Masters Thesis, Utah State University. Accessed March 08, 2021. https://digitalcommons.usu.edu/etd/7934.

MLA Handbook (7th Edition):

Smith, Joshua. “Formal Verification of the Adversarial Robustness Property of Deep Neural Networks Through Dimension Reduction Heuristics, Refutation-based Abstraction, and Partitioning.” 2020. Web. 08 Mar 2021.

Vancouver:

Smith J. Formal Verification of the Adversarial Robustness Property of Deep Neural Networks Through Dimension Reduction Heuristics, Refutation-based Abstraction, and Partitioning. [Internet] [Masters thesis]. Utah State University; 2020. [cited 2021 Mar 08]. Available from: https://digitalcommons.usu.edu/etd/7934.

Council of Science Editors:

Smith J. Formal Verification of the Adversarial Robustness Property of Deep Neural Networks Through Dimension Reduction Heuristics, Refutation-based Abstraction, and Partitioning. [Masters Thesis]. Utah State University; 2020. Available from: https://digitalcommons.usu.edu/etd/7934

3. ZHANG JINGFENG. ROBUST LEARNING AND PREDICTION IN DEEP LEARNING.

Degree: 2020, National University of Singapore

Subjects/Keywords: Deep learning; training robustness; adversarial robustness



APA (6th Edition):

JINGFENG, Z. (2020). ROBUST LEARNING AND PREDICTION IN DEEP LEARNING. (Thesis). National University of Singapore. Retrieved from https://scholarbank.nus.edu.sg/handle/10635/186364

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

JINGFENG, ZHANG. “ROBUST LEARNING AND PREDICTION IN DEEP LEARNING.” 2020. Thesis, National University of Singapore. Accessed March 08, 2021. https://scholarbank.nus.edu.sg/handle/10635/186364.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

JINGFENG, ZHANG. “ROBUST LEARNING AND PREDICTION IN DEEP LEARNING.” 2020. Web. 08 Mar 2021.

Vancouver:

JINGFENG Z. ROBUST LEARNING AND PREDICTION IN DEEP LEARNING. [Internet] [Thesis]. National University of Singapore; 2020. [cited 2021 Mar 08]. Available from: https://scholarbank.nus.edu.sg/handle/10635/186364.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

JINGFENG Z. ROBUST LEARNING AND PREDICTION IN DEEP LEARNING. [Thesis]. National University of Singapore; 2020. Available from: https://scholarbank.nus.edu.sg/handle/10635/186364

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Waterloo

4. Jeddi, Ahmadreza. Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness.

Degree: 2020, University of Waterloo

Deep neural networks have been achieving state-of-the-art performance across a wide variety of applications, and due to their outstanding performance, they are being deployed in…

Subjects/Keywords: computer vision; machine learning; adversarial robustness; trustable ai



APA (6th Edition):

Jeddi, A. (2020). Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/16132

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Jeddi, Ahmadreza. “Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness.” 2020. Thesis, University of Waterloo. Accessed March 08, 2021. http://hdl.handle.net/10012/16132.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Jeddi, Ahmadreza. “Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness.” 2020. Web. 08 Mar 2021.

Vancouver:

Jeddi A. Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness. [Internet] [Thesis]. University of Waterloo; 2020. [cited 2021 Mar 08]. Available from: http://hdl.handle.net/10012/16132.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Jeddi A. Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness. [Thesis]. University of Waterloo; 2020. Available from: http://hdl.handle.net/10012/16132

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Halmstad University

5. Uličný, Matej. Methods for Increasing Robustness of Deep Convolutional Neural Networks.

Degree: Information Technology, 2015, Halmstad University

Recent discoveries uncovered flaws in machine learning algorithms such as deep neural networks. Deep neural networks seem vulnerable to small amounts of non-random noise,…

Subjects/Keywords: adversarial examples; deep neural network; noise robustness; Computer Sciences



APA (6th Edition):

Uličný, M. (2015). Methods for Increasing Robustness of Deep Convolutional Neural Networks. (Thesis). Halmstad University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-29734

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Uličný, Matej. “Methods for Increasing Robustness of Deep Convolutional Neural Networks.” 2015. Thesis, Halmstad University. Accessed March 08, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-29734.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Uličný, Matej. “Methods for Increasing Robustness of Deep Convolutional Neural Networks.” 2015. Web. 08 Mar 2021.

Vancouver:

Uličný M. Methods for Increasing Robustness of Deep Convolutional Neural Networks. [Internet] [Thesis]. Halmstad University; 2015. [cited 2021 Mar 08]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-29734.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Uličný M. Methods for Increasing Robustness of Deep Convolutional Neural Networks. [Thesis]. Halmstad University; 2015. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-29734

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Vanderbilt University

6. -4180-995X. Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions.

Degree: MS, Computer Science, 2020, Vanderbilt University

The ability of deep neural networks (DNNs) to achieve state-of-the-art performance on complicated tasks has increased their adoption in safety-critical cyber physical systems (CPS). However,…

Subjects/Keywords: Neural Network Robustness; Anomalies; Adversarial Attacks; Poisoning Attacks; Radial Basis Functions



APA (6th Edition):

-4180-995X. (2020). Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions. (Masters Thesis). Vanderbilt University. Retrieved from http://hdl.handle.net/1803/10082

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-4180-995X. “Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions.” 2020. Masters Thesis, Vanderbilt University. Accessed March 08, 2021. http://hdl.handle.net/1803/10082.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

-4180-995X. “Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions.” 2020. Web. 08 Mar 2021.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-4180-995X. Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions. [Internet] [Masters thesis]. Vanderbilt University; 2020. [cited 2021 Mar 08]. Available from: http://hdl.handle.net/1803/10082.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

-4180-995X. Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions. [Masters Thesis]. Vanderbilt University; 2020. Available from: http://hdl.handle.net/1803/10082

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete


Brno University of Technology

7. Gaňo, Martin. Improving Robustness of Neural Networks against Adversarial Examples: Improving Robustness of Neural Networks against Adversarial Examples.

Degree: 2020, Brno University of Technology

This work discusses adversarial attacks on image-classifier neural network models. Our goal is to summarize and demonstrate adversarial methods to show that they pose…

Subjects/Keywords: Neural networks; Optimization; Machine learning; Adversarial attack; Adversarial examples; Robustness; Adversarial machine learning



APA (6th Edition):

Gaňo, M. (2020). Improving Robustness of Neural Networks against Adversarial Examples: Improving Robustness of Neural Networks against Adversarial Examples. (Thesis). Brno University of Technology. Retrieved from http://hdl.handle.net/11012/191713

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Gaňo, Martin. “Improving Robustness of Neural Networks against Adversarial Examples: Improving Robustness of Neural Networks against Adversarial Examples.” 2020. Thesis, Brno University of Technology. Accessed March 08, 2021. http://hdl.handle.net/11012/191713.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Gaňo, Martin. “Improving Robustness of Neural Networks against Adversarial Examples: Improving Robustness of Neural Networks against Adversarial Examples.” 2020. Web. 08 Mar 2021.

Vancouver:

Gaňo M. Improving Robustness of Neural Networks against Adversarial Examples: Improving Robustness of Neural Networks against Adversarial Examples. [Internet] [Thesis]. Brno University of Technology; 2020. [cited 2021 Mar 08]. Available from: http://hdl.handle.net/11012/191713.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Gaňo M. Improving Robustness of Neural Networks against Adversarial Examples: Improving Robustness of Neural Networks against Adversarial Examples. [Thesis]. Brno University of Technology; 2020. Available from: http://hdl.handle.net/11012/191713

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Delft University of Technology

8. Lelekas, Ioannis (author). Top-Down Networks: A coarse-to-fine reimagination of CNNs.

Degree: 2020, Delft University of Technology

Biological vision adopts a coarse-to-fine information processing pathway, from initial visual detection and binding of salient features of a visual scene, to the enhanced and…

Subjects/Keywords: Computer Vision; Deep Learning; Convolutional Neural Networks; Top-Down; Fine-to-Coarse; Coarse-to-Fine; Adversarial attacks; Adversarial robustness; Gradcam; Object localization



APA (6th Edition):

Lelekas, I. (2020). Top-Down Networks: A coarse-to-fine reimagination of CNNs. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:11888a7b-1e54-424d-9daa-8ff48de58345

Chicago Manual of Style (16th Edition):

Lelekas, Ioannis (author). “Top-Down Networks: A coarse-to-fine reimagination of CNNs.” 2020. Masters Thesis, Delft University of Technology. Accessed March 08, 2021. http://resolver.tudelft.nl/uuid:11888a7b-1e54-424d-9daa-8ff48de58345.

MLA Handbook (7th Edition):

Lelekas, Ioannis (author). “Top-Down Networks: A coarse-to-fine reimagination of CNNs.” 2020. Web. 08 Mar 2021.

Vancouver:

Lelekas I. Top-Down Networks: A coarse-to-fine reimagination of CNNs. [Internet] [Masters thesis]. Delft University of Technology; 2020. [cited 2021 Mar 08]. Available from: http://resolver.tudelft.nl/uuid:11888a7b-1e54-424d-9daa-8ff48de58345.

Council of Science Editors:

Lelekas I. Top-Down Networks: A coarse-to-fine reimagination of CNNs. [Masters Thesis]. Delft University of Technology; 2020. Available from: http://resolver.tudelft.nl/uuid:11888a7b-1e54-424d-9daa-8ff48de58345


University of Queensland

9. Yang, Siqi. On the robustness of object and face detection: false positives, attacks and adaptability.

Degree: School of Information Technology and Electrical Engineering, 2020, University of Queensland

Subjects/Keywords: Object detection; Face detection; Robustness; False positive; Adversarial attack; Domain adaptation; 0801 Artificial Intelligence and Image Processing; 0803 Computer Software



APA (6th Edition):

Yang, S. (2020). On the robustness of object and face detection: false positives, attacks and adaptability. (Thesis). University of Queensland. Retrieved from https://espace.library.uq.edu.au/view/UQ:f1b1c87/thumbnail_s4366230_final_thesis_t.jpg ; https://espace.library.uq.edu.au/view/UQ:f1b1c87/s4366230_final_thesis.pdf ; https://espace.library.uq.edu.au/view/UQ:f1b1c87

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Yang, Siqi. “On the robustness of object and face detection: false positives, attacks and adaptability.” 2020. Thesis, University of Queensland. Accessed March 08, 2021. https://espace.library.uq.edu.au/view/UQ:f1b1c87/thumbnail_s4366230_final_thesis_t.jpg ; https://espace.library.uq.edu.au/view/UQ:f1b1c87/s4366230_final_thesis.pdf ; https://espace.library.uq.edu.au/view/UQ:f1b1c87.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Yang, Siqi. “On the robustness of object and face detection: false positives, attacks and adaptability.” 2020. Web. 08 Mar 2021.

Vancouver:

Yang S. On the robustness of object and face detection: false positives, attacks and adaptability. [Internet] [Thesis]. University of Queensland; 2020. [cited 2021 Mar 08]. Available from: https://espace.library.uq.edu.au/view/UQ:f1b1c87/thumbnail_s4366230_final_thesis_t.jpg ; https://espace.library.uq.edu.au/view/UQ:f1b1c87/s4366230_final_thesis.pdf ; https://espace.library.uq.edu.au/view/UQ:f1b1c87.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Yang S. On the robustness of object and face detection: false positives, attacks and adaptability. [Thesis]. University of Queensland; 2020. Available from: https://espace.library.uq.edu.au/view/UQ:f1b1c87/thumbnail_s4366230_final_thesis_t.jpg ; https://espace.library.uq.edu.au/view/UQ:f1b1c87/s4366230_final_thesis.pdf ; https://espace.library.uq.edu.au/view/UQ:f1b1c87

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
