
You searched for subject:(Neural accelerator). Showing records 1 – 19 of 19 total matches.

University of Toronto

1. Siu, Kevin. Reducing Off-chip Memory Accesses in Deep Neural Network Accelerators.

Degree: 2019, University of Toronto

The popularity of deep neural networks (DNNs) has led to widespread development of specialized hardware for accelerating their computations. Many designs employ a memory system… (more)

Subjects/Keywords: Hardware Accelerator; Neural Network; 0464


APA (6th Edition):

Siu, K. (2019). Reducing Off-chip Memory Accesses in Deep Neural Network Accelerators. (Masters Thesis). University of Toronto. Retrieved from http://hdl.handle.net/1807/98351

Chicago Manual of Style (16th Edition):

Siu, Kevin. “Reducing Off-chip Memory Accesses in Deep Neural Network Accelerators.” 2019. Masters Thesis, University of Toronto. Accessed March 02, 2021. http://hdl.handle.net/1807/98351.

MLA Handbook (7th Edition):

Siu, Kevin. “Reducing Off-chip Memory Accesses in Deep Neural Network Accelerators.” 2019. Web. 02 Mar 2021.

Vancouver:

Siu K. Reducing Off-chip Memory Accesses in Deep Neural Network Accelerators. [Internet] [Masters thesis]. University of Toronto; 2019. [cited 2021 Mar 02]. Available from: http://hdl.handle.net/1807/98351.

Council of Science Editors:

Siu K. Reducing Off-chip Memory Accesses in Deep Neural Network Accelerators. [Masters Thesis]. University of Toronto; 2019. Available from: http://hdl.handle.net/1807/98351


Delft University of Technology

2. Marigi Rajanarayana, Shashanka. SASCNN: A Systolic Array Simulator for CNN.

Degree: 2019, Delft University of Technology

Convolutional Neural Networks (CNNs) are used in many applications ranging from real-time object detection to robot-motion planning. CNNs are implemented on high-performance systems like multi-core… (more)

Subjects/Keywords: Deep neural networks; CNN; Accelerator; Simulator; systolic array; Near-memory computing.


APA (6th Edition):

Marigi Rajanarayana, S. (2019). SASCNN: A Systolic Array Simulator for CNN. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:5266a567-9864-4ffd-8e25-0d4d0e5f322a

Chicago Manual of Style (16th Edition):

Marigi Rajanarayana, Shashanka. “SASCNN: A Systolic Array Simulator for CNN.” 2019. Masters Thesis, Delft University of Technology. Accessed March 02, 2021. http://resolver.tudelft.nl/uuid:5266a567-9864-4ffd-8e25-0d4d0e5f322a.

MLA Handbook (7th Edition):

Marigi Rajanarayana, Shashanka. “SASCNN: A Systolic Array Simulator for CNN.” 2019. Web. 02 Mar 2021.

Vancouver:

Marigi Rajanarayana S. SASCNN: A Systolic Array Simulator for CNN. [Internet] [Masters thesis]. Delft University of Technology; 2019. [cited 2021 Mar 02]. Available from: http://resolver.tudelft.nl/uuid:5266a567-9864-4ffd-8e25-0d4d0e5f322a.

Council of Science Editors:

Marigi Rajanarayana S. SASCNN: A Systolic Array Simulator for CNN. [Masters Thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:5266a567-9864-4ffd-8e25-0d4d0e5f322a


University of Toronto

3. Hall, Mathew Kent. Architecture and Automation for Efficient Convolutional Neural Network Acceleration on Field Programmable Gate Arrays.

Degree: 2020, University of Toronto

This thesis explores Convolutional Neural Network (CNN) inference accelerator architecture for FPGAs. We compare and contrast three high-level architectures, and elect to build a layer-pipelined… (more)

Subjects/Keywords: Accelerator; Computer Architecture; Convolutional Neural Networks; FPGA; 0464


APA (6th Edition):

Hall, M. K. (2020). Architecture and Automation for Efficient Convolutional Neural Network Acceleration on Field Programmable Gate Arrays. (Masters Thesis). University of Toronto. Retrieved from http://hdl.handle.net/1807/103545

Chicago Manual of Style (16th Edition):

Hall, Mathew Kent. “Architecture and Automation for Efficient Convolutional Neural Network Acceleration on Field Programmable Gate Arrays.” 2020. Masters Thesis, University of Toronto. Accessed March 02, 2021. http://hdl.handle.net/1807/103545.

MLA Handbook (7th Edition):

Hall, Mathew Kent. “Architecture and Automation for Efficient Convolutional Neural Network Acceleration on Field Programmable Gate Arrays.” 2020. Web. 02 Mar 2021.

Vancouver:

Hall MK. Architecture and Automation for Efficient Convolutional Neural Network Acceleration on Field Programmable Gate Arrays. [Internet] [Masters thesis]. University of Toronto; 2020. [cited 2021 Mar 02]. Available from: http://hdl.handle.net/1807/103545.

Council of Science Editors:

Hall MK. Architecture and Automation for Efficient Convolutional Neural Network Acceleration on Field Programmable Gate Arrays. [Masters Thesis]. University of Toronto; 2020. Available from: http://hdl.handle.net/1807/103545


University of Cincinnati

4. Anderson, Thomas. Built-In Self Training of Hardware-Based Neural Networks.

Degree: MS, Engineering and Applied Science: Computer Engineering, 2017, University of Cincinnati

Artificial neural networks and deep learning are topics of increasing interest in computing. This has spurred investigation into dedicated hardware, like accelerators, to speed… (more)

Subjects/Keywords: Computer Engineering; neural networks; backpropagation algorithm; training; accelerator; hardware; function approximation


APA (6th Edition):

Anderson, T. (2017). Built-In Self Training of Hardware-Based Neural Networks. (Masters Thesis). University of Cincinnati. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=ucin1512039036199393

Chicago Manual of Style (16th Edition):

Anderson, Thomas. “Built-In Self Training of Hardware-Based Neural Networks.” 2017. Masters Thesis, University of Cincinnati. Accessed March 02, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1512039036199393.

MLA Handbook (7th Edition):

Anderson, Thomas. “Built-In Self Training of Hardware-Based Neural Networks.” 2017. Web. 02 Mar 2021.

Vancouver:

Anderson T. Built-In Self Training of Hardware-Based Neural Networks. [Internet] [Masters thesis]. University of Cincinnati; 2017. [cited 2021 Mar 02]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=ucin1512039036199393.

Council of Science Editors:

Anderson T. Built-In Self Training of Hardware-Based Neural Networks. [Masters Thesis]. University of Cincinnati; 2017. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=ucin1512039036199393


NSYSU

5. Wu, Pei-Hsuan. Architecture Design and Implementation of Deep Neural Network Hardware Accelerators.

Degree: Master, Computer Science and Engineering, 2018, NSYSU

Deep Neural Networks (DNNs), widely used in computer vision applications, have superior performance in image classification and object detection. However, the huge amount of data… (more)

Subjects/Keywords: CNN hardware accelerator; deep neural network (DNN); convolutional neural network (CNN); machine learning


APA (6th Edition):

Wu, P. (2018). Architecture Design and Implementation of Deep Neural Network Hardware Accelerators. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0729118-154714

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Wu, Pei-Hsuan. “Architecture Design and Implementation of Deep Neural Network Hardware Accelerators.” 2018. Thesis, NSYSU. Accessed March 02, 2021. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0729118-154714.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Wu, Pei-Hsuan. “Architecture Design and Implementation of Deep Neural Network Hardware Accelerators.” 2018. Web. 02 Mar 2021.

Vancouver:

Wu P. Architecture Design and Implementation of Deep Neural Network Hardware Accelerators. [Internet] [Thesis]. NSYSU; 2018. [cited 2021 Mar 02]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0729118-154714.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Wu P. Architecture Design and Implementation of Deep Neural Network Hardware Accelerators. [Thesis]. NSYSU; 2018. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0729118-154714

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Michigan

6. Cho, Sung-Gun. Scalable, High-Performance Accelerators for Neural Network and Signal Processing: Logical and Physical Design Considerations.

Degree: PhD, Electrical and Computer Engineering, 2020, University of Michigan

The recent success of machine learning in a broad spectrum of fields has ushered in a new era of artificial intelligence (AI). From assistance tailored to personal… (more)

Subjects/Keywords: Deep learning accelerator; Neuromorphic computing; FPGA-integrated accelerator; Chiplet; Spiking neural network; Matrix multiplication acceleration; Electrical Engineering; Engineering


APA (6th Edition):

Cho, S. (2020). Scalable, High-Performance Accelerators for Neural Network and Signal Processing: Logical and Physical Design Considerations. (Doctoral Dissertation). University of Michigan. Retrieved from http://hdl.handle.net/2027.42/163079

Chicago Manual of Style (16th Edition):

Cho, Sung-Gun. “Scalable, High-Performance Accelerators for Neural Network and Signal Processing: Logical and Physical Design Considerations.” 2020. Doctoral Dissertation, University of Michigan. Accessed March 02, 2021. http://hdl.handle.net/2027.42/163079.

MLA Handbook (7th Edition):

Cho, Sung-Gun. “Scalable, High-Performance Accelerators for Neural Network and Signal Processing: Logical and Physical Design Considerations.” 2020. Web. 02 Mar 2021.

Vancouver:

Cho S. Scalable, High-Performance Accelerators for Neural Network and Signal Processing: Logical and Physical Design Considerations. [Internet] [Doctoral dissertation]. University of Michigan; 2020. [cited 2021 Mar 02]. Available from: http://hdl.handle.net/2027.42/163079.

Council of Science Editors:

Cho S. Scalable, High-Performance Accelerators for Neural Network and Signal Processing: Logical and Physical Design Considerations. [Doctoral Dissertation]. University of Michigan; 2020. Available from: http://hdl.handle.net/2027.42/163079


Louisiana State University

7. Zhao, Zhou. Techniques of Energy-Efficient VLSI Chip Design for High-Performance Computing.

Degree: PhD, Electrical and Electronics, 2018, Louisiana State University

How to implement quality computing within a limited power budget is the key factor in moving very-large-scale integration (VLSI) chip design forward… (more)

Subjects/Keywords: Low power VLSI design; neural computing; adiabatic logic; approximated computing; analog accelerator


APA (6th Edition):

Zhao, Z. (2018). Techniques of Energy-Efficient VLSI Chip Design for High-Performance Computing. (Doctoral Dissertation). Louisiana State University. Retrieved from https://digitalcommons.lsu.edu/gradschool_dissertations/4702

Chicago Manual of Style (16th Edition):

Zhao, Zhou. “Techniques of Energy-Efficient VLSI Chip Design for High-Performance Computing.” 2018. Doctoral Dissertation, Louisiana State University. Accessed March 02, 2021. https://digitalcommons.lsu.edu/gradschool_dissertations/4702.

MLA Handbook (7th Edition):

Zhao, Zhou. “Techniques of Energy-Efficient VLSI Chip Design for High-Performance Computing.” 2018. Web. 02 Mar 2021.

Vancouver:

Zhao Z. Techniques of Energy-Efficient VLSI Chip Design for High-Performance Computing. [Internet] [Doctoral dissertation]. Louisiana State University; 2018. [cited 2021 Mar 02]. Available from: https://digitalcommons.lsu.edu/gradschool_dissertations/4702.

Council of Science Editors:

Zhao Z. Techniques of Energy-Efficient VLSI Chip Design for High-Performance Computing. [Doctoral Dissertation]. Louisiana State University; 2018. Available from: https://digitalcommons.lsu.edu/gradschool_dissertations/4702


University of Michigan

8. Wang, Jingcheng. Interconnect and Memory Design for Intelligent Mobile System.

Degree: PhD, Electrical and Computer Engineering, 2020, University of Michigan

Technology scaling has driven transistors to smaller area, higher performance, and lower power consumption, which leads us into the mobile and edge computing… (more)

Subjects/Keywords: interconnect; low power SRAM; Compute-In-Memory; Neural Network Accelerator; low leakage SRAM; energy-efficient digital circuit; Electrical Engineering; Engineering


APA (6th Edition):

Wang, J. (2020). Interconnect and Memory Design for Intelligent Mobile System. (Doctoral Dissertation). University of Michigan. Retrieved from http://hdl.handle.net/2027.42/155232

Chicago Manual of Style (16th Edition):

Wang, Jingcheng. “Interconnect and Memory Design for Intelligent Mobile System.” 2020. Doctoral Dissertation, University of Michigan. Accessed March 02, 2021. http://hdl.handle.net/2027.42/155232.

MLA Handbook (7th Edition):

Wang, Jingcheng. “Interconnect and Memory Design for Intelligent Mobile System.” 2020. Web. 02 Mar 2021.

Vancouver:

Wang J. Interconnect and Memory Design for Intelligent Mobile System. [Internet] [Doctoral dissertation]. University of Michigan; 2020. [cited 2021 Mar 02]. Available from: http://hdl.handle.net/2027.42/155232.

Council of Science Editors:

Wang J. Interconnect and Memory Design for Intelligent Mobile System. [Doctoral Dissertation]. University of Michigan; 2020. Available from: http://hdl.handle.net/2027.42/155232

9. St Amant, Renee Marie. Enabling high-performance, mixed-signal approximate computing.

Degree: PhD, Computer Science, 2014, University of Texas – Austin

 For decades, the semiconductor industry enjoyed exponential improvements in microprocessor power and performance with the device scaling of successive technology generations. Scaling limitations at sub-micron… (more)

Subjects/Keywords: Approximate computing; Neural branch prediction; Neural accelerator; General purpose computing


APA (6th Edition):

St Amant, R. M. (2014). Enabling high-performance, mixed-signal approximate computing. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/25025

Chicago Manual of Style (16th Edition):

St Amant, Renee Marie. “Enabling high-performance, mixed-signal approximate computing.” 2014. Doctoral Dissertation, University of Texas – Austin. Accessed March 02, 2021. http://hdl.handle.net/2152/25025.

MLA Handbook (7th Edition):

St Amant, Renee Marie. “Enabling high-performance, mixed-signal approximate computing.” 2014. Web. 02 Mar 2021.

Vancouver:

St Amant RM. Enabling high-performance, mixed-signal approximate computing. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2014. [cited 2021 Mar 02]. Available from: http://hdl.handle.net/2152/25025.

Council of Science Editors:

St Amant RM. Enabling high-performance, mixed-signal approximate computing. [Doctoral Dissertation]. University of Texas – Austin; 2014. Available from: http://hdl.handle.net/2152/25025


Georgia Tech

10. Ko, Sho. Efficient Pipelined ReRAM-Based Processing-In-Memory Architecture for Convolutional Neural Network Inference.

Degree: MS, Electrical and Computer Engineering, 2020, Georgia Tech

This research presents the design of an analog ReRAM-based PIM (processing-in-memory) architecture for fast and efficient CNN (convolutional neural network) inference. For the overall… (more)

Subjects/Keywords: hardware accelerator; ReRAM (resistive random access memory); PIM (processing-in-memory); CNN (convolutional neural network); NoC (network-on-chip); SMART flow control


APA (6th Edition):

Ko, S. (2020). Efficient Pipelined ReRAM-Based Processing-In-Memory Architecture for Convolutional Neural Network Inference. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62806

Chicago Manual of Style (16th Edition):

Ko, Sho. “Efficient Pipelined ReRAM-Based Processing-In-Memory Architecture for Convolutional Neural Network Inference.” 2020. Masters Thesis, Georgia Tech. Accessed March 02, 2021. http://hdl.handle.net/1853/62806.

MLA Handbook (7th Edition):

Ko, Sho. “Efficient Pipelined ReRAM-Based Processing-In-Memory Architecture for Convolutional Neural Network Inference.” 2020. Web. 02 Mar 2021.

Vancouver:

Ko S. Efficient Pipelined ReRAM-Based Processing-In-Memory Architecture for Convolutional Neural Network Inference. [Internet] [Masters thesis]. Georgia Tech; 2020. [cited 2021 Mar 02]. Available from: http://hdl.handle.net/1853/62806.

Council of Science Editors:

Ko S. Efficient Pipelined ReRAM-Based Processing-In-Memory Architecture for Convolutional Neural Network Inference. [Masters Thesis]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62806


University of Illinois – Urbana-Champaign

11. Gonugondla, Sujan Kumar. Cross-layer methods for energy-efficient inference using in-memory architectures.

Degree: PhD, Electrical & Computer Engr, 2020, University of Illinois – Urbana-Champaign

 In the near future, we will be surrounded by intelligent devices that transform the way we interact with the world. These devices need to acquire… (more)

Subjects/Keywords: deep neural networks; edge; Inference; machine learning; on-chip learning; in-memory architectures; in-memory computing; application specific integrated circuits; SRAM; quantization; compression; accelerator; energy-efficiency; cross-layer


APA (6th Edition):

Gonugondla, S. K. (2020). Cross-layer methods for energy-efficient inference using in-memory architectures. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/107929

Chicago Manual of Style (16th Edition):

Gonugondla, Sujan Kumar. “Cross-layer methods for energy-efficient inference using in-memory architectures.” 2020. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed March 02, 2021. http://hdl.handle.net/2142/107929.

MLA Handbook (7th Edition):

Gonugondla, Sujan Kumar. “Cross-layer methods for energy-efficient inference using in-memory architectures.” 2020. Web. 02 Mar 2021.

Vancouver:

Gonugondla SK. Cross-layer methods for energy-efficient inference using in-memory architectures. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2020. [cited 2021 Mar 02]. Available from: http://hdl.handle.net/2142/107929.

Council of Science Editors:

Gonugondla SK. Cross-layer methods for energy-efficient inference using in-memory architectures. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2020. Available from: http://hdl.handle.net/2142/107929


Universitat Politècnica de Catalunya

12. Riera Villanueva, Marc. Low-power accelerators for cognitive computing.

Degree: Departament d'Arquitectura de Computadors, 2020, Universitat Politècnica de Catalunya

Deep neural networks (DNNs) have achieved enormous success in cognitive applications, and are especially efficient in classification and decision-making problems… (more)

Subjects/Keywords: Machine learning; Deep neural network (DNN); Hardware accelerator; Low-power architecture; Computation reuse; Input similarity; Weight repetition; Quantization; Pruning; Àrees temàtiques de la UPC::Informàtica; 004


APA (6th Edition):

Riera Villanueva, M. (2020). Low-power accelerators for cognitive computing. (Thesis). Universitat Politècnica de Catalunya. Retrieved from http://hdl.handle.net/10803/669828

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Riera Villanueva, Marc. “Low-power accelerators for cognitive computing.” 2020. Thesis, Universitat Politècnica de Catalunya. Accessed March 02, 2021. http://hdl.handle.net/10803/669828.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Riera Villanueva, Marc. “Low-power accelerators for cognitive computing.” 2020. Web. 02 Mar 2021.

Vancouver:

Riera Villanueva M. Low-power accelerators for cognitive computing. [Internet] [Thesis]. Universitat Politècnica de Catalunya; 2020. [cited 2021 Mar 02]. Available from: http://hdl.handle.net/10803/669828.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Riera Villanueva M. Low-power accelerators for cognitive computing. [Thesis]. Universitat Politècnica de Catalunya; 2020. Available from: http://hdl.handle.net/10803/669828

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Arizona State University

13. Kim, Minkyu. Energy-Efficient ASIC Accelerators for Machine/Deep Learning Algorithms.

Degree: Electrical Engineering, 2019, Arizona State University

Subjects/Keywords: Electrical engineering; ASIC; Deep learning; Hardware accelerator; Machine learning; Neural networks


APA (6th Edition):

Kim, M. (2019). Energy-Efficient ASIC Accelerators for Machine/Deep Learning Algorithms. (Doctoral Dissertation). Arizona State University. Retrieved from http://repository.asu.edu/items/55506

Chicago Manual of Style (16th Edition):

Kim, Minkyu. “Energy-Efficient ASIC Accelerators for Machine/Deep Learning Algorithms.” 2019. Doctoral Dissertation, Arizona State University. Accessed March 02, 2021. http://repository.asu.edu/items/55506.

MLA Handbook (7th Edition):

Kim, Minkyu. “Energy-Efficient ASIC Accelerators for Machine/Deep Learning Algorithms.” 2019. Web. 02 Mar 2021.

Vancouver:

Kim M. Energy-Efficient ASIC Accelerators for Machine/Deep Learning Algorithms. [Internet] [Doctoral dissertation]. Arizona State University; 2019. [cited 2021 Mar 02]. Available from: http://repository.asu.edu/items/55506.

Council of Science Editors:

Kim M. Energy-Efficient ASIC Accelerators for Machine/Deep Learning Algorithms. [Doctoral Dissertation]. Arizona State University; 2019. Available from: http://repository.asu.edu/items/55506


Indian Institute of Science

14. Mohammadi, Mahnaz. An Accelerator for Machine Learning Based Classifiers.

Degree: PhD, Engineering, 2019, Indian Institute of Science

Artificial Neural Networks (ANNs) are algorithmic techniques that simulate biological neural systems. Typical realizations of ANNs are software solutions using High-Level Languages (HLLs) such… (more)

Subjects/Keywords: ANN Classifiers; Naive Bayesian Classifier; Pseudoinverse; Radial Basis Function Neural Network Classifier; Support Vector Machines; Reconfigurable Hardware Accelerator; K-means; Radial Basis Function Neural Networks; Reconfigurable Hardware Architecture; Radial Basis Function Neural Network (RBFNN); Computational and Data Sciences


APA (6th Edition):

Mohammadi, M. (2019). An Accelerator for Machine Learning Based Classifiers. (Doctoral Dissertation). Indian Institute of Science. Retrieved from http://etd.iisc.ac.in/handle/2005/4245

Chicago Manual of Style (16th Edition):

Mohammadi, Mahnaz. “An Accelerator for Machine Learning Based Classifiers.” 2019. Doctoral Dissertation, Indian Institute of Science. Accessed March 02, 2021. http://etd.iisc.ac.in/handle/2005/4245.

MLA Handbook (7th Edition):

Mohammadi, Mahnaz. “An Accelerator for Machine Learning Based Classifiers.” 2019. Web. 02 Mar 2021.

Vancouver:

Mohammadi M. An Accelerator for Machine Learning Based Classifiers. [Internet] [Doctoral dissertation]. Indian Institute of Science; 2019. [cited 2021 Mar 02]. Available from: http://etd.iisc.ac.in/handle/2005/4245.

Council of Science Editors:

Mohammadi M. An Accelerator for Machine Learning Based Classifiers. [Doctoral Dissertation]. Indian Institute of Science; 2019. Available from: http://etd.iisc.ac.in/handle/2005/4245


Arizona State University

15. Kadambi, Pradyumna. Robust Networks: Neural Networks Robust to Quantization Noise and Analog Computation Noise Based on Natural Gradient.

Degree: Computer Engineering, 2019, Arizona State University

Subjects/Keywords: Artificial intelligence; Computer engineering; Computer science; analog neural network; distillation; fisher information; kl-divergence; non-volatile memory accelerator; quantized neural network


APA (6th Edition):

Kadambi, P. (2019). Robust Networks: Neural Networks Robust to Quantization Noise and Analog Computation Noise Based on Natural Gradient. (Masters Thesis). Arizona State University. Retrieved from http://repository.asu.edu/items/55679

Chicago Manual of Style (16th Edition):

Kadambi, Pradyumna. “Robust Networks: Neural Networks Robust to Quantization Noise and Analog Computation Noise Based on Natural Gradient.” 2019. Masters Thesis, Arizona State University. Accessed March 02, 2021. http://repository.asu.edu/items/55679.

MLA Handbook (7th Edition):

Kadambi, Pradyumna. “Robust Networks: Neural Networks Robust to Quantization Noise and Analog Computation Noise Based on Natural Gradient.” 2019. Web. 02 Mar 2021.

Vancouver:

Kadambi P. Robust Networks: Neural Networks Robust to Quantization Noise and Analog Computation Noise Based on Natural Gradient. [Internet] [Masters thesis]. Arizona State University; 2019. [cited 2021 Mar 02]. Available from: http://repository.asu.edu/items/55679.

Council of Science Editors:

Kadambi P. Robust Networks: Neural Networks Robust to Quantization Noise and Analog Computation Noise Based on Natural Gradient. [Masters Thesis]. Arizona State University; 2019. Available from: http://repository.asu.edu/items/55679

16. Wu, Di. Scaling Accelerator-Rich Systems for Big-Data Analytics.

Degree: Computer Science, 2017, UCLA

As Dennard scaling comes to an end, the energy density of computing devices can no longer increase. As a result, both industry and academia… (more)

Subjects/Keywords: Computer science; accelerator; big-data systems; deep learning; FPGA; heterogeneous systems; neural simulation


APA (6th Edition):

Wu, D. (2017). Scaling Accelerator-Rich Systems for Big-Data Analytics. (Thesis). UCLA. Retrieved from http://www.escholarship.org/uc/item/1m795574

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Wu, Di. “Scaling Accelerator-Rich Systems for Big-Data Analytics.” 2017. Thesis, UCLA. Accessed March 02, 2021. http://www.escholarship.org/uc/item/1m795574.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Wu, Di. “Scaling Accelerator-Rich Systems for Big-Data Analytics.” 2017. Web. 02 Mar 2021.

Vancouver:

Wu D. Scaling Accelerator-Rich Systems for Big-Data Analytics. [Internet] [Thesis]. UCLA; 2017. [cited 2021 Mar 02]. Available from: http://www.escholarship.org/uc/item/1m795574.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Wu D. Scaling Accelerator-Rich Systems for Big-Data Analytics. [Thesis]. UCLA; 2017. Available from: http://www.escholarship.org/uc/item/1m795574

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Arizona State University

17. Ma, Yufei. Hardware Acceleration of Deep Convolutional Neural Networks on FPGA.

Degree: Electrical Engineering, 2018, Arizona State University

Subjects/Keywords: Electrical engineering; Computer engineering; Artificial intelligence; Computer Vision; Convolutional Neural Networks; FPGA; Hardware Accelerator


APA (6th Edition):

Ma, Y. (2018). Hardware Acceleration of Deep Convolutional Neural Networks on FPGA. (Doctoral Dissertation). Arizona State University. Retrieved from http://repository.asu.edu/items/51620

Chicago Manual of Style (16th Edition):

Ma, Yufei. “Hardware Acceleration of Deep Convolutional Neural Networks on FPGA.” 2018. Doctoral Dissertation, Arizona State University. Accessed March 02, 2021. http://repository.asu.edu/items/51620.

MLA Handbook (7th Edition):

Ma, Yufei. “Hardware Acceleration of Deep Convolutional Neural Networks on FPGA.” 2018. Web. 02 Mar 2021.

Vancouver:

Ma Y. Hardware Acceleration of Deep Convolutional Neural Networks on FPGA. [Internet] [Doctoral dissertation]. Arizona State University; 2018. [cited 2021 Mar 02]. Available from: http://repository.asu.edu/items/51620.

Council of Science Editors:

Ma Y. Hardware Acceleration of Deep Convolutional Neural Networks on FPGA. [Doctoral Dissertation]. Arizona State University; 2018. Available from: http://repository.asu.edu/items/51620

18. Saha, Saunak. Towards energy-efficient hardware acceleration of memory-intensive event-driven kernels on a synchronous neuromorphic substrate.

Degree: 2019, Iowa State University

Spiking neural networks are becoming increasingly popular as low-power alternatives to deep learning architectures. To make edge processing possible in resource-constrained embedded devices, there is… (more)

Subjects/Keywords: Accelerator; Caches; Computational Neuroscience; Energy Efficiency; Neuromorphic; Spiking Neural Network; Artificial Intelligence and Robotics; Computer Engineering; Electrical and Electronics


APA (6th Edition):

Saha, S. (2019). Towards energy-efficient hardware acceleration of memory-intensive event-driven kernels on a synchronous neuromorphic substrate. (Thesis). Iowa State University. Retrieved from https://lib.dr.iastate.edu/etd/17556

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Saha, Saunak. “Towards energy-efficient hardware acceleration of memory-intensive event-driven kernels on a synchronous neuromorphic substrate.” 2019. Thesis, Iowa State University. Accessed March 02, 2021. https://lib.dr.iastate.edu/etd/17556.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Saha, Saunak. “Towards energy-efficient hardware acceleration of memory-intensive event-driven kernels on a synchronous neuromorphic substrate.” 2019. Web. 02 Mar 2021.

Vancouver:

Saha S. Towards energy-efficient hardware acceleration of memory-intensive event-driven kernels on a synchronous neuromorphic substrate. [Internet] [Thesis]. Iowa State University; 2019. [cited 2021 Mar 02]. Available from: https://lib.dr.iastate.edu/etd/17556.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Saha S. Towards energy-efficient hardware acceleration of memory-intensive event-driven kernels on a synchronous neuromorphic substrate. [Thesis]. Iowa State University; 2019. Available from: https://lib.dr.iastate.edu/etd/17556

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

19. Yazdan Bakhsh, Amir. Neuro-general computing: an acceleration-approximation approach.

Degree: PhD, Computer Science, 2018, Georgia Tech

A growing number of commercial and enterprise systems rely on compute- and power-intensive tasks. While the demand for these tasks is growing, the performance… (more)

Subjects/Keywords: Approximate computing; Machine learning; Generative adversarial networks; Convolutional neural networks; CNN; Transposed convolution; Access-execute architecture; GAN; DNN; MIMD; SIMD; Accelerator


APA (6th Edition):

Yazdan Bakhsh, A. (2018). Neuro-general computing: an acceleration-approximation approach. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/60224

Chicago Manual of Style (16th Edition):

Yazdan Bakhsh, Amir. “Neuro-general computing: an acceleration-approximation approach.” 2018. Doctoral Dissertation, Georgia Tech. Accessed March 02, 2021. http://hdl.handle.net/1853/60224.

MLA Handbook (7th Edition):

Yazdan Bakhsh, Amir. “Neuro-general computing: an acceleration-approximation approach.” 2018. Web. 02 Mar 2021.

Vancouver:

Yazdan Bakhsh A. Neuro-general computing: an acceleration-approximation approach. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2021 Mar 02]. Available from: http://hdl.handle.net/1853/60224.

Council of Science Editors:

Yazdan Bakhsh A. Neuro-general computing: an acceleration-approximation approach. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/60224
