You searched for subject:(Adversarial attacks). Showing records 1 – 14 of 14 total matches.
University of Illinois – Urbana-Champaign
1. Agarwal, Rishika. Adversarial attacks and defenses for generative models.
Degree: MS, Computer Science, 2019, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/104943
Subjects/Keywords: Adversarial Machine learning; generative models; attacks; defenses
APA (6th Edition):
Agarwal, R. (2019). Adversarial attacks and defenses for generative models. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/104943
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Agarwal, Rishika. “Adversarial attacks and defenses for generative models.” 2019. Thesis, University of Illinois – Urbana-Champaign. Accessed March 07, 2021. http://hdl.handle.net/2142/104943.
MLA Handbook (7th Edition):
Agarwal, Rishika. “Adversarial attacks and defenses for generative models.” 2019. Web. 07 Mar 2021.
Vancouver:
Agarwal R. Adversarial attacks and defenses for generative models. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2019. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2142/104943.
Council of Science Editors:
Agarwal R. Adversarial attacks and defenses for generative models. [Thesis]. University of Illinois – Urbana-Champaign; 2019. Available from: http://hdl.handle.net/2142/104943
Texas A&M University
2. He, Yukun. Robust Dynamical Step Adversarial Training Defense for Deep Neural Networks.
Degree: MS, Computer Engineering, 2018, Texas A&M University
URL: http://hdl.handle.net/1969.1/174606
Subjects/Keywords: Adversarial Attacks; Adversarial Training; PGD; Dynamical Step; Neural Network; Defense
APA (6th Edition):
He, Y. (2018). Robust Dynamical Step Adversarial Training Defense for Deep Neural Networks. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/174606
Chicago Manual of Style (16th Edition):
He, Yukun. “Robust Dynamical Step Adversarial Training Defense for Deep Neural Networks.” 2018. Masters Thesis, Texas A&M University. Accessed March 07, 2021. http://hdl.handle.net/1969.1/174606.
MLA Handbook (7th Edition):
He, Yukun. “Robust Dynamical Step Adversarial Training Defense for Deep Neural Networks.” 2018. Web. 07 Mar 2021.
Vancouver:
He Y. Robust Dynamical Step Adversarial Training Defense for Deep Neural Networks. [Internet] [Masters thesis]. Texas A&M University; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1969.1/174606.
Council of Science Editors:
He Y. Robust Dynamical Step Adversarial Training Defense for Deep Neural Networks. [Masters Thesis]. Texas A&M University; 2018. Available from: http://hdl.handle.net/1969.1/174606
University of Michigan
3. Xiao, Chaowei. Machine Learning in Adversarial Environments.
Degree: PhD, Computer Science & Engineering, 2020, University of Michigan
URL: http://hdl.handle.net/2027.42/162944
Subjects/Keywords: adversarial machine learning; security; adversarial attacks; adversarial examples; defense; Computer Science; Engineering
APA (6th Edition):
Xiao, C. (2020). Machine Learning in Adversarial Environments. (Doctoral Dissertation). University of Michigan. Retrieved from http://hdl.handle.net/2027.42/162944
Chicago Manual of Style (16th Edition):
Xiao, Chaowei. “Machine Learning in Adversarial Environments.” 2020. Doctoral Dissertation, University of Michigan. Accessed March 07, 2021. http://hdl.handle.net/2027.42/162944.
MLA Handbook (7th Edition):
Xiao, Chaowei. “Machine Learning in Adversarial Environments.” 2020. Web. 07 Mar 2021.
Vancouver:
Xiao C. Machine Learning in Adversarial Environments. [Internet] [Doctoral dissertation]. University of Michigan; 2020. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2027.42/162944.
Council of Science Editors:
Xiao C. Machine Learning in Adversarial Environments. [Doctoral Dissertation]. University of Michigan; 2020. Available from: http://hdl.handle.net/2027.42/162944
Vanderbilt University
4. -4180-995X. Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions.
Degree: MS, Computer Science, 2020, Vanderbilt University
URL: http://hdl.handle.net/1803/10082
Subjects/Keywords: Neural Network Robustness; Anomalies; Adversarial Attacks; Poisoning Attacks; Radial Basis Functions
APA (6th Edition):
-4180-995X. (2020). Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions. (Masters Thesis). Vanderbilt University. Retrieved from http://hdl.handle.net/1803/10082
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-4180-995X. “Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions.” 2020. Masters Thesis, Vanderbilt University. Accessed March 07, 2021. http://hdl.handle.net/1803/10082.
MLA Handbook (7th Edition):
-4180-995X. “Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions.” 2020. Web. 07 Mar 2021.
Vancouver:
-4180-995X. Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions. [Internet] [Masters thesis]. Vanderbilt University; 2020. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1803/10082.
Council of Science Editors:
-4180-995X. Enhancing the Robustness of Deep Neural Networks Against Security Threats Using Radial Basis Functions. [Masters Thesis]. Vanderbilt University; 2020. Available from: http://hdl.handle.net/1803/10082
San Jose State University
5. Mani, Nag. On Adversarial Attacks on Deep Learning Models.
Degree: MS, Computer Science, 2019, San Jose State University
URL: https://doi.org/10.31979/etd.49ee-sknc ; https://scholarworks.sjsu.edu/etd_projects/742
Subjects/Keywords: image neural net attacks; adversarial retraining techniques; Artificial Intelligence and Robotics
APA (6th Edition):
Mani, N. (2019). On Adversarial Attacks on Deep Learning Models. (Masters Thesis). San Jose State University. Retrieved from https://doi.org/10.31979/etd.49ee-sknc ; https://scholarworks.sjsu.edu/etd_projects/742
Chicago Manual of Style (16th Edition):
Mani, Nag. “On Adversarial Attacks on Deep Learning Models.” 2019. Masters Thesis, San Jose State University. Accessed March 07, 2021. https://doi.org/10.31979/etd.49ee-sknc ; https://scholarworks.sjsu.edu/etd_projects/742.
MLA Handbook (7th Edition):
Mani, Nag. “On Adversarial Attacks on Deep Learning Models.” 2019. Web. 07 Mar 2021.
Vancouver:
Mani N. On Adversarial Attacks on Deep Learning Models. [Internet] [Masters thesis]. San Jose State University; 2019. [cited 2021 Mar 07]. Available from: https://doi.org/10.31979/etd.49ee-sknc ; https://scholarworks.sjsu.edu/etd_projects/742.
Council of Science Editors:
Mani N. On Adversarial Attacks on Deep Learning Models. [Masters Thesis]. San Jose State University; 2019. Available from: https://doi.org/10.31979/etd.49ee-sknc ; https://scholarworks.sjsu.edu/etd_projects/742
University of Illinois – Chicago
6. Biradar, Sachin G. User-Centric Adversarial Perturbations to Protect Privacy in Social Networks.
Degree: 2019, University of Illinois – Chicago
URL: http://hdl.handle.net/10027/23723
Subjects/Keywords: Privacy; Social Networks; Adversarial Attacks; Fooling Algorithm; Graph Convolutional Networks
APA (6th Edition):
Biradar, S. G. (2019). User-Centric Adversarial Perturbations to Protect Privacy in Social Networks. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/23723
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Biradar, Sachin G. “User-Centric Adversarial Perturbations to Protect Privacy in Social Networks.” 2019. Thesis, University of Illinois – Chicago. Accessed March 07, 2021. http://hdl.handle.net/10027/23723.
MLA Handbook (7th Edition):
Biradar, Sachin G. “User-Centric Adversarial Perturbations to Protect Privacy in Social Networks.” 2019. Web. 07 Mar 2021.
Vancouver:
Biradar SG. User-Centric Adversarial Perturbations to Protect Privacy in Social Networks. [Internet] [Thesis]. University of Illinois – Chicago; 2019. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10027/23723.
Council of Science Editors:
Biradar SG. User-Centric Adversarial Perturbations to Protect Privacy in Social Networks. [Thesis]. University of Illinois – Chicago; 2019. Available from: http://hdl.handle.net/10027/23723
University of Washington
7. Burkard, Cody. Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples?
Degree: 2017, University of Washington
URL: http://hdl.handle.net/1773/39901
Subjects/Keywords: Adversarial Examples; Adversarial Machine Learning; Evasion Attacks; Machine Learning; Neural Networks; Security; Computer science; To Be Assigned
APA (6th Edition):
Burkard, C. (2017). Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples? (Thesis). University of Washington. Retrieved from http://hdl.handle.net/1773/39901
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Burkard, Cody. “Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples?” 2017. Thesis, University of Washington. Accessed March 07, 2021. http://hdl.handle.net/1773/39901.
MLA Handbook (7th Edition):
Burkard, Cody. “Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples?” 2017. Web. 07 Mar 2021.
Vancouver:
Burkard C. Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples? [Internet] [Thesis]. University of Washington; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1773/39901.
Council of Science Editors:
Burkard C. Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples? [Thesis]. University of Washington; 2017. Available from: http://hdl.handle.net/1773/39901
University of Toronto
8. Bose, Avishek. Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization.
Degree: 2018, University of Toronto
URL: http://hdl.handle.net/1807/91439
Subjects/Keywords: Adversarial Attacks; Computer Vision; Face Detection; Faster R-CNN; Machine Learning; Neural Networks; 0800
APA (6th Edition):
Bose, A. (2018). Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization. (Masters Thesis). University of Toronto. Retrieved from http://hdl.handle.net/1807/91439
Chicago Manual of Style (16th Edition):
Bose, Avishek. “Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization.” 2018. Masters Thesis, University of Toronto. Accessed March 07, 2021. http://hdl.handle.net/1807/91439.
MLA Handbook (7th Edition):
Bose, Avishek. “Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization.” 2018. Web. 07 Mar 2021.
Vancouver:
Bose A. Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization. [Internet] [Masters thesis]. University of Toronto; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1807/91439.
Council of Science Editors:
Bose A. Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization. [Masters Thesis]. University of Toronto; 2018. Available from: http://hdl.handle.net/1807/91439
King Abdullah University of Science and Technology
9. Alfarra, Motasem. Applications of Tropical Geometry in Deep Neural Networks.
Degree: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, 2020, King Abdullah University of Science and Technology
URL: http://hdl.handle.net/10754/662473
Subjects/Keywords: Deep Learning; Deep Neural Networks; Tropical Geometry; Network Pruning; Lottery Ticket Hypothesis; Adversarial Attacks
APA (6th Edition):
Alfarra, M. (2020). Applications of Tropical Geometry in Deep Neural Networks. (Thesis). King Abdullah University of Science and Technology. Retrieved from http://hdl.handle.net/10754/662473
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Alfarra, Motasem. “Applications of Tropical Geometry in Deep Neural Networks.” 2020. Thesis, King Abdullah University of Science and Technology. Accessed March 07, 2021. http://hdl.handle.net/10754/662473.
MLA Handbook (7th Edition):
Alfarra, Motasem. “Applications of Tropical Geometry in Deep Neural Networks.” 2020. Web. 07 Mar 2021.
Vancouver:
Alfarra M. Applications of Tropical Geometry in Deep Neural Networks. [Internet] [Thesis]. King Abdullah University of Science and Technology; 2020. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10754/662473.
Council of Science Editors:
Alfarra M. Applications of Tropical Geometry in Deep Neural Networks. [Thesis]. King Abdullah University of Science and Technology; 2020. Available from: http://hdl.handle.net/10754/662473
University of Melbourne
10. Yang, Tianci. Multi-observer approach for estimation and control under adversarial attacks.
Degree: 2019, University of Melbourne
URL: http://hdl.handle.net/11343/227777
Subjects/Keywords: networked control systems; cyber-physical systems; adversarial attacks; sensor attacks; actuator attacks; control systems; multi-observer; attack detection and isolation; secure estimation
APA (6th Edition):
Yang, T. (2019). Multi-observer approach for estimation and control under adversarial attacks. (Doctoral Dissertation). University of Melbourne. Retrieved from http://hdl.handle.net/11343/227777
Chicago Manual of Style (16th Edition):
Yang, Tianci. “Multi-observer approach for estimation and control under adversarial attacks.” 2019. Doctoral Dissertation, University of Melbourne. Accessed March 07, 2021. http://hdl.handle.net/11343/227777.
MLA Handbook (7th Edition):
Yang, Tianci. “Multi-observer approach for estimation and control under adversarial attacks.” 2019. Web. 07 Mar 2021.
Vancouver:
Yang T. Multi-observer approach for estimation and control under adversarial attacks. [Internet] [Doctoral dissertation]. University of Melbourne; 2019. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/11343/227777.
Council of Science Editors:
Yang T. Multi-observer approach for estimation and control under adversarial attacks. [Doctoral Dissertation]. University of Melbourne; 2019. Available from: http://hdl.handle.net/11343/227777
Delft University of Technology
11. Lelekas, Ioannis. Top-Down Networks: A coarse-to-fine reimagination of CNNs.
Degree: 2020, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:11888a7b-1e54-424d-9daa-8ff48de58345
Subjects/Keywords: Computer Vision; Deep Learning; Convolutional Neural Networks; Top-Down; Fine-to-Coarse; Coarse-to-Fine; Adversarial attacks; Adversarial robustness; Gradcam; Object localization
APA (6th Edition):
Lelekas, I. (2020). Top-Down Networks: A coarse-to-fine reimagination of CNNs. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:11888a7b-1e54-424d-9daa-8ff48de58345
Chicago Manual of Style (16th Edition):
Lelekas, Ioannis. “Top-Down Networks: A coarse-to-fine reimagination of CNNs.” 2020. Masters Thesis, Delft University of Technology. Accessed March 07, 2021. http://resolver.tudelft.nl/uuid:11888a7b-1e54-424d-9daa-8ff48de58345.
MLA Handbook (7th Edition):
Lelekas, Ioannis. “Top-Down Networks: A coarse-to-fine reimagination of CNNs.” 2020. Web. 07 Mar 2021.
Vancouver:
Lelekas I. Top-Down Networks: A coarse-to-fine reimagination of CNNs. [Internet] [Masters thesis]. Delft University of Technology; 2020. [cited 2021 Mar 07]. Available from: http://resolver.tudelft.nl/uuid:11888a7b-1e54-424d-9daa-8ff48de58345.
Council of Science Editors:
Lelekas I. Top-Down Networks: A coarse-to-fine reimagination of CNNs. [Masters Thesis]. Delft University of Technology; 2020. Available from: http://resolver.tudelft.nl/uuid:11888a7b-1e54-424d-9daa-8ff48de58345
George Mason University
12. Bhat, Sahil. Defense Against Cache Based Micro-architectural Side Channel Attacks.
Degree: George Mason University
URL: http://hdl.handle.net/1920/11489
Subjects/Keywords: side channel attacks; malware detection; hardware performance counters; hardware security; Flush + Reload; adversarial
APA (6th Edition):
Bhat, S. (n.d.). Defense Against Cache Based Micro-architectural Side Channel Attacks. (Thesis). George Mason University. Retrieved from http://hdl.handle.net/1920/11489
Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Bhat, Sahil. “Defense Against Cache Based Micro-architectural Side Channel Attacks.” Thesis, George Mason University. Accessed March 07, 2021. http://hdl.handle.net/1920/11489.
MLA Handbook (7th Edition):
Bhat, Sahil. “Defense Against Cache Based Micro-architectural Side Channel Attacks.” Web. 07 Mar 2021.
Vancouver:
Bhat S. Defense Against Cache Based Micro-architectural Side Channel Attacks. [Internet] [Thesis]. George Mason University; [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1920/11489.
Council of Science Editors:
Bhat S. Defense Against Cache Based Micro-architectural Side Channel Attacks. [Thesis]. George Mason University; Available from: http://hdl.handle.net/1920/11489
13. Chen, Lingwei. Intelligent Malware Detection Using File-to-file Relations and Enhancing its Security against Adversarial Attacks.
Degree: PhD, Lane Department of Computer Science and Electrical Engineering, 2019, West Virginia University
URL: https://doi.org/10.33915/etd.3844 ; https://researchrepository.wvu.edu/etd/3844
Subjects/Keywords: Malware Detection; Machine Learning; Data Mining; File-to-file Relations; Adversarial Attacks and Defenses; Information Security
APA (6th Edition):
Chen, L. (2019). Intelligent Malware Detection Using File-to-file Relations and Enhancing its Security against Adversarial Attacks. (Doctoral Dissertation). West Virginia University. Retrieved from https://doi.org/10.33915/etd.3844 ; https://researchrepository.wvu.edu/etd/3844
Chicago Manual of Style (16th Edition):
Chen, Lingwei. “Intelligent Malware Detection Using File-to-file Relations and Enhancing its Security against Adversarial Attacks.” 2019. Doctoral Dissertation, West Virginia University. Accessed March 07, 2021. https://doi.org/10.33915/etd.3844 ; https://researchrepository.wvu.edu/etd/3844.
MLA Handbook (7th Edition):
Chen, Lingwei. “Intelligent Malware Detection Using File-to-file Relations and Enhancing its Security against Adversarial Attacks.” 2019. Web. 07 Mar 2021.
Vancouver:
Chen L. Intelligent Malware Detection Using File-to-file Relations and Enhancing its Security against Adversarial Attacks. [Internet] [Doctoral dissertation]. West Virginia University; 2019. [cited 2021 Mar 07]. Available from: https://doi.org/10.33915/etd.3844 ; https://researchrepository.wvu.edu/etd/3844.
Council of Science Editors:
Chen L. Intelligent Malware Detection Using File-to-file Relations and Enhancing its Security against Adversarial Attacks. [Doctoral Dissertation]. West Virginia University; 2019. Available from: https://doi.org/10.33915/etd.3844 ; https://researchrepository.wvu.edu/etd/3844
14. Sanchez Vicarte, Jose Rodrigo. Game of threads: Enabling asynchronous poisoning attacks.
Degree: MS, Electrical & Computer Engr, 2019, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/106293
APA (6th Edition):
Sanchez Vicarte, J. R. (2019). Game of threads: Enabling asynchronous poisoning attacks. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/106293
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Sanchez Vicarte, Jose Rodrigo. “Game of threads: Enabling asynchronous poisoning attacks.” 2019. Thesis, University of Illinois – Urbana-Champaign. Accessed March 07, 2021. http://hdl.handle.net/2142/106293.
MLA Handbook (7th Edition):
Sanchez Vicarte, Jose Rodrigo. “Game of threads: Enabling asynchronous poisoning attacks.” 2019. Web. 07 Mar 2021.
Vancouver:
Sanchez Vicarte JR. Game of threads: Enabling asynchronous poisoning attacks. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2019. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2142/106293.
Council of Science Editors:
Sanchez Vicarte JR. Game of threads: Enabling asynchronous poisoning attacks. [Thesis]. University of Illinois – Urbana-Champaign; 2019. Available from: http://hdl.handle.net/2142/106293