Advanced search options


You searched for subject:(parallelism). Showing records 1 – 30 of 397 total matches.



University of Illinois – Urbana-Champaign

1. Brodman, James C. Data parallelism with hierarchically tiled objects.

Degree: PhD, 0112, 2011, University of Illinois – Urbana-Champaign

Exploiting parallelism in modern machines increases the difficulty of developing applications. Thus, new abstractions are needed that facilitate parallel programming and at the same… (more)

Subjects/Keywords: parallelism; parallel programming; data parallelism; tiling


APA (6th Edition):

Brodman, J. C. (2011). Data parallelism with hierarchically tiled objects. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/24316

Chicago Manual of Style (16th Edition):

Brodman, James C. “Data parallelism with hierarchically tiled objects.” 2011. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed September 16, 2019. http://hdl.handle.net/2142/24316.

MLA Handbook (7th Edition):

Brodman, James C. “Data parallelism with hierarchically tiled objects.” 2011. Web. 16 Sep 2019.

Vancouver:

Brodman JC. Data parallelism with hierarchically tiled objects. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2011. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/2142/24316.

Council of Science Editors:

Brodman JC. Data parallelism with hierarchically tiled objects. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2011. Available from: http://hdl.handle.net/2142/24316


University of Rochester

2. Kelsey, Kirk M. Coarse-grained speculative parallelism and optimization.

Degree: PhD, 2011, University of Rochester

 The computing industry has long relied on computation becoming faster through steady exponential growth in the density of transistors on a chip. While the growth… (more)

Subjects/Keywords: Optimization; Parallelism; Speculation


APA (6th Edition):

Kelsey, K. M. (2011). Coarse-grained speculative parallelism and optimization. (Doctoral Dissertation). University of Rochester. Retrieved from http://hdl.handle.net/1802/16955

Chicago Manual of Style (16th Edition):

Kelsey, Kirk M. “Coarse-grained speculative parallelism and optimization.” 2011. Doctoral Dissertation, University of Rochester. Accessed September 16, 2019. http://hdl.handle.net/1802/16955.

MLA Handbook (7th Edition):

Kelsey, Kirk M. “Coarse-grained speculative parallelism and optimization.” 2011. Web. 16 Sep 2019.

Vancouver:

Kelsey KM. Coarse-grained speculative parallelism and optimization. [Internet] [Doctoral dissertation]. University of Rochester; 2011. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1802/16955.

Council of Science Editors:

Kelsey KM. Coarse-grained speculative parallelism and optimization. [Doctoral Dissertation]. University of Rochester; 2011. Available from: http://hdl.handle.net/1802/16955


Texas A&M University

3. Fatehi, Ehsan. ILP and TLP in Shared Memory Applications: A Limit Study.

Degree: 2015, Texas A&M University

The work in this dissertation explores the limits of chip multiprocessors (CMPs) with respect to shared-memory, multi-threaded benchmarks, which will aid in identifying microarchitectural bottlenecks.… (more)

Subjects/Keywords: instruction-level parallelism; limits parallelism; concurrency pthreads; thread-level parallelism


APA (6th Edition):

Fatehi, E. (2015). ILP and TLP in Shared Memory Applications: A Limit Study. (Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/155119

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Fatehi, Ehsan. “ILP and TLP in Shared Memory Applications: A Limit Study.” 2015. Thesis, Texas A&M University. Accessed September 16, 2019. http://hdl.handle.net/1969.1/155119.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Fatehi, Ehsan. “ILP and TLP in Shared Memory Applications: A Limit Study.” 2015. Web. 16 Sep 2019.

Vancouver:

Fatehi E. ILP and TLP in Shared Memory Applications: A Limit Study. [Internet] [Thesis]. Texas A&M University; 2015. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1969.1/155119.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Fatehi E. ILP and TLP in Shared Memory Applications: A Limit Study. [Thesis]. Texas A&M University; 2015. Available from: http://hdl.handle.net/1969.1/155119

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Georgia

4. Bhattiprolu, Srikanth. TAP : A tool for evaluating different processor assignments in task and data parallel programs.

Degree: MS, Computer Science, 2001, University of Georgia

 A parallel program is usually written using either data parallelism or task parallelism. With data parallelism, each processor executes the same code but operates on… (more)

Subjects/Keywords: task parallelism


APA (6th Edition):

Bhattiprolu, S. (2001). TAP : A tool for evaluating different processor assignments in task and data parallel programs. (Masters Thesis). University of Georgia. Retrieved from http://purl.galileo.usg.edu/uga_etd/bhattiprolu_srikanth_200108_ms

Chicago Manual of Style (16th Edition):

Bhattiprolu, Srikanth. “TAP : A tool for evaluating different processor assignments in task and data parallel programs.” 2001. Masters Thesis, University of Georgia. Accessed September 16, 2019. http://purl.galileo.usg.edu/uga_etd/bhattiprolu_srikanth_200108_ms.

MLA Handbook (7th Edition):

Bhattiprolu, Srikanth. “TAP : A tool for evaluating different processor assignments in task and data parallel programs.” 2001. Web. 16 Sep 2019.

Vancouver:

Bhattiprolu S. TAP : A tool for evaluating different processor assignments in task and data parallel programs. [Internet] [Masters thesis]. University of Georgia; 2001. [cited 2019 Sep 16]. Available from: http://purl.galileo.usg.edu/uga_etd/bhattiprolu_srikanth_200108_ms.

Council of Science Editors:

Bhattiprolu S. TAP : A tool for evaluating different processor assignments in task and data parallel programs. [Masters Thesis]. University of Georgia; 2001. Available from: http://purl.galileo.usg.edu/uga_etd/bhattiprolu_srikanth_200108_ms
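The abstract above draws the classic distinction between data parallelism (every processor runs the same code on different data) and task parallelism (processors run different code concurrently). A minimal illustrative sketch of that distinction in Python, using the standard `concurrent.futures` module — the function names here are hypothetical examples, not from the thesis:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # The single function every worker applies (data parallelism).
    return x * x

def data_parallel(values, workers=4):
    # Same code on every worker, each call operating on a different element.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(square, values))

def task_parallel(values, text):
    # Different code on each worker, running side by side (task parallelism).
    with ThreadPoolExecutor(max_workers=2) as pool:
        total = pool.submit(sum, values)                     # task 1
        words = pool.submit(lambda t: len(t.split()), text)  # task 2
        return total.result(), words.result()

print(data_parallel([1, 2, 3, 4]))        # [1, 4, 9, 16]
print(task_parallel([1, 2, 3], "a b c"))  # (6, 3)
```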

5. Powell, Daniel Christopher. Lightweight speculative support for aggressive auto-parallelisation tools.

Degree: PhD, 2015, University of Edinburgh

 With the recent move to multi-core architectures it has become important to create the means to exploit the performance made available to us by these… (more)

Subjects/Keywords: 005.2; parallelism; speculative execution


APA (6th Edition):

Powell, D. C. (2015). Lightweight speculative support for aggressive auto-parallelisation tools. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/10566

Chicago Manual of Style (16th Edition):

Powell, Daniel Christopher. “Lightweight speculative support for aggressive auto-parallelisation tools.” 2015. Doctoral Dissertation, University of Edinburgh. Accessed September 16, 2019. http://hdl.handle.net/1842/10566.

MLA Handbook (7th Edition):

Powell, Daniel Christopher. “Lightweight speculative support for aggressive auto-parallelisation tools.” 2015. Web. 16 Sep 2019.

Vancouver:

Powell DC. Lightweight speculative support for aggressive auto-parallelisation tools. [Internet] [Doctoral dissertation]. University of Edinburgh; 2015. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1842/10566.

Council of Science Editors:

Powell DC. Lightweight speculative support for aggressive auto-parallelisation tools. [Doctoral Dissertation]. University of Edinburgh; 2015. Available from: http://hdl.handle.net/1842/10566


Penn State University

6. Yedlapalli, Praveen. A Study of Parallelism-locality Tradeoffs across Memory Hierarchy.

Degree: PhD, Computer Science and Engineering, 2015, Penn State University

 As the number of cores on a chip increases, the memory bandwidth requirements become a scalability issue. Current CMPs incorporate multiple resources both on-chip and… (more)

Subjects/Keywords: Memory; SOC; parallelism; locality


APA (6th Edition):

Yedlapalli, P. (2015). A Study of Parallelism-locality Tradeoffs across Memory Hierarchy. (Doctoral Dissertation). Penn State University. Retrieved from https://etda.libraries.psu.edu/catalog/26536

Chicago Manual of Style (16th Edition):

Yedlapalli, Praveen. “A Study of Parallelism-locality Tradeoffs across Memory Hierarchy.” 2015. Doctoral Dissertation, Penn State University. Accessed September 16, 2019. https://etda.libraries.psu.edu/catalog/26536.

MLA Handbook (7th Edition):

Yedlapalli, Praveen. “A Study of Parallelism-locality Tradeoffs across Memory Hierarchy.” 2015. Web. 16 Sep 2019.

Vancouver:

Yedlapalli P. A Study of Parallelism-locality Tradeoffs across Memory Hierarchy. [Internet] [Doctoral dissertation]. Penn State University; 2015. [cited 2019 Sep 16]. Available from: https://etda.libraries.psu.edu/catalog/26536.

Council of Science Editors:

Yedlapalli P. A Study of Parallelism-locality Tradeoffs across Memory Hierarchy. [Doctoral Dissertation]. Penn State University; 2015. Available from: https://etda.libraries.psu.edu/catalog/26536


Cornell University

7. Tian, Yuan. A Parallel Implementation Of Hierarchical Belief Propagation .

Degree: 2013, Cornell University

 Though Belief Propagation (BP) algorithms generate high quality results for a wide range of Markov Random Field (MRF) formulated energy minimization problems, they require large… (more)

Subjects/Keywords: Belief Propagation; Graphical Models; Parallelism


APA (6th Edition):

Tian, Y. (2013). A Parallel Implementation Of Hierarchical Belief Propagation . (Thesis). Cornell University. Retrieved from http://hdl.handle.net/1813/34099

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Tian, Yuan. “A Parallel Implementation Of Hierarchical Belief Propagation .” 2013. Thesis, Cornell University. Accessed September 16, 2019. http://hdl.handle.net/1813/34099.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Tian, Yuan. “A Parallel Implementation Of Hierarchical Belief Propagation .” 2013. Web. 16 Sep 2019.

Vancouver:

Tian Y. A Parallel Implementation Of Hierarchical Belief Propagation . [Internet] [Thesis]. Cornell University; 2013. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1813/34099.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Tian Y. A Parallel Implementation Of Hierarchical Belief Propagation . [Thesis]. Cornell University; 2013. Available from: http://hdl.handle.net/1813/34099

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Uppsala University

8. Karlsson, Johan. Efficient use of Multi-core Technology in Interactive Desktop Applications.

Degree: Information Technology, 2015, Uppsala University

  The emergence of multi-core processors has successfully ended the era where applications could enjoy free and regular performance improvements without source code modifications. This… (more)

Subjects/Keywords: Multi-core processors; parallelism


APA (6th Edition):

Karlsson, J. (2015). Efficient use of Multi-core Technology in Interactive Desktop Applications. (Thesis). Uppsala University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-246120

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Karlsson, Johan. “Efficient use of Multi-core Technology in Interactive Desktop Applications.” 2015. Thesis, Uppsala University. Accessed September 16, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-246120.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Karlsson, Johan. “Efficient use of Multi-core Technology in Interactive Desktop Applications.” 2015. Web. 16 Sep 2019.

Vancouver:

Karlsson J. Efficient use of Multi-core Technology in Interactive Desktop Applications. [Internet] [Thesis]. Uppsala University; 2015. [cited 2019 Sep 16]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-246120.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Karlsson J. Efficient use of Multi-core Technology in Interactive Desktop Applications. [Thesis]. Uppsala University; 2015. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-246120

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Waterloo

9. Pérez Gavilán Torres, Camila María. Performance of the Ultra-Wide Word Model.

Degree: 2017, University of Waterloo

The Ultra-wide word model of computation (UWRAM) is an extension of the Word-RAM model which has an ALU that can operate on w² bits at… (more)

Subjects/Keywords: uwram; algorithms; architecture; model; parallelism


APA (6th Edition):

Pérez Gavilán Torres, C. M. (2017). Performance of the Ultra-Wide Word Model. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/12349

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Pérez Gavilán Torres, Camila María. “Performance of the Ultra-Wide Word Model.” 2017. Thesis, University of Waterloo. Accessed September 16, 2019. http://hdl.handle.net/10012/12349.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Pérez Gavilán Torres, Camila María. “Performance of the Ultra-Wide Word Model.” 2017. Web. 16 Sep 2019.

Vancouver:

Pérez Gavilán Torres CM. Performance of the Ultra-Wide Word Model. [Internet] [Thesis]. University of Waterloo; 2017. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/10012/12349.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Pérez Gavilán Torres CM. Performance of the Ultra-Wide Word Model. [Thesis]. University of Waterloo; 2017. Available from: http://hdl.handle.net/10012/12349

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Rice University

10. Sharman, Jonathan. Exploring Tradeoffs in Parallel Implementations of C++ using Futures.

Degree: MS, Engineering, 2017, Rice University

 As the degree of hardware concurrency continues to rise, multi-core programming becomes increasingly important for the development of high-performance code. Parallel futures are a safe,… (more)

Subjects/Keywords: C++; parallelism; HPC; futures; Fibertures


APA (6th Edition):

Sharman, J. (2017). Exploring Tradeoffs in Parallel Implementations of C++ using Futures. (Masters Thesis). Rice University. Retrieved from http://hdl.handle.net/1911/105498

Chicago Manual of Style (16th Edition):

Sharman, Jonathan. “Exploring Tradeoffs in Parallel Implementations of C++ using Futures.” 2017. Masters Thesis, Rice University. Accessed September 16, 2019. http://hdl.handle.net/1911/105498.

MLA Handbook (7th Edition):

Sharman, Jonathan. “Exploring Tradeoffs in Parallel Implementations of C++ using Futures.” 2017. Web. 16 Sep 2019.

Vancouver:

Sharman J. Exploring Tradeoffs in Parallel Implementations of C++ using Futures. [Internet] [Masters thesis]. Rice University; 2017. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1911/105498.

Council of Science Editors:

Sharman J. Exploring Tradeoffs in Parallel Implementations of C++ using Futures. [Masters Thesis]. Rice University; 2017. Available from: http://hdl.handle.net/1911/105498
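The thesis above concerns futures in C++; the same pattern — submit work, get back a handle, and block only when the result is needed — exists in Python's standard `concurrent.futures`, which the illustrative sketch below uses (the workload is an arbitrary stand-in, not from the thesis):

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    # Deliberately slow recursive Fibonacci as stand-in work.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

with ThreadPoolExecutor(max_workers=3) as pool:
    # Each submit() returns immediately with a future — a handle to the
    # eventual result; the caller blocks only at result().
    futures = [pool.submit(fib, n) for n in (10, 20, 25)]
    results = [f.result() for f in futures]

print(results)  # [55, 6765, 75025]
```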


University of Arizona

11. Gaska, Benjamin James. ParForPy: Loop Parallelism in Python .

Degree: 2017, University of Arizona

Scientists are trending towards usage of high-level programming languages such as Python. The convenience of these languages often has a performance cost. As the amount… (more)

Subjects/Keywords: Parallelism; Programming Languages; Python


APA (6th Edition):

Gaska, B. J. (2017). ParForPy: Loop Parallelism in Python . (Masters Thesis). University of Arizona. Retrieved from http://hdl.handle.net/10150/625320

Chicago Manual of Style (16th Edition):

Gaska, Benjamin James. “ParForPy: Loop Parallelism in Python .” 2017. Masters Thesis, University of Arizona. Accessed September 16, 2019. http://hdl.handle.net/10150/625320.

MLA Handbook (7th Edition):

Gaska, Benjamin James. “ParForPy: Loop Parallelism in Python .” 2017. Web. 16 Sep 2019.

Vancouver:

Gaska BJ. ParForPy: Loop Parallelism in Python . [Internet] [Masters thesis]. University of Arizona; 2017. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/10150/625320.

Council of Science Editors:

Gaska BJ. ParForPy: Loop Parallelism in Python . [Masters Thesis]. University of Arizona; 2017. Available from: http://hdl.handle.net/10150/625320
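Loop parallelism of the kind ParForPy targets rewrites a loop whose iterations are independent into a parallel map. A hypothetical sketch of that transformation using only the Python standard library (this is not ParForPy's actual API):

```python
from multiprocessing.dummy import Pool  # thread-backed, same API as multiprocessing.Pool

def body(i):
    # A loop body whose iterations are independent of one another.
    return 3 * i + 1

def loop_sequential(n):
    out = []
    for i in range(n):          # ordinary Python for-loop, one iteration at a time
        out.append(body(i))
    return out

def loop_parallel(n, workers=4):
    # Equivalent parallel form: iterations distributed across workers.
    with Pool(workers) as pool:
        return pool.map(body, range(n))

assert loop_sequential(8) == loop_parallel(8)
```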


University of Manitoba

12. Nemes, Jordan. Root parallelism in Invisalign® treatment.

Degree: Preventive Dental Science, 2015, University of Manitoba

 AIM: To assess root parallelism after Invisalign® treatment. MATERIALS AND METHODS: The sample consisted of 101 patients (mean age: 22.7 years, 29 males, 72 females)… (more)

Subjects/Keywords: Invisalign; Root; Parallelism; Orthodontic


APA (6th Edition):

Nemes, J. (2015). Root parallelism in Invisalign® treatment. (Masters Thesis). University of Manitoba. Retrieved from http://hdl.handle.net/1993/31259

Chicago Manual of Style (16th Edition):

Nemes, Jordan. “Root parallelism in Invisalign® treatment.” 2015. Masters Thesis, University of Manitoba. Accessed September 16, 2019. http://hdl.handle.net/1993/31259.

MLA Handbook (7th Edition):

Nemes, Jordan. “Root parallelism in Invisalign® treatment.” 2015. Web. 16 Sep 2019.

Vancouver:

Nemes J. Root parallelism in Invisalign® treatment. [Internet] [Masters thesis]. University of Manitoba; 2015. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1993/31259.

Council of Science Editors:

Nemes J. Root parallelism in Invisalign® treatment. [Masters Thesis]. University of Manitoba; 2015. Available from: http://hdl.handle.net/1993/31259


Northeastern University

13. Momeni, Amir. Exploiting thread-level parallelism on reconfigurable architectures: a cross-layer approach.

Degree: PhD, Department of Electrical and Computer Engineering, 2017, Northeastern University

Field Programmable Gate Arrays (FPGAs) are one major class of architectures commonly used in parallel computing systems. FPGAs provide a massive number (i.e., millions) of programmable… (more)

Subjects/Keywords: FPGA; GPU; OpenCL; parallelism


APA (6th Edition):

Momeni, A. (2017). Exploiting thread-level parallelism on reconfigurable architectures: a cross-layer approach. (Doctoral Dissertation). Northeastern University. Retrieved from http://hdl.handle.net/2047/D20254348

Chicago Manual of Style (16th Edition):

Momeni, Amir. “Exploiting thread-level parallelism on reconfigurable architectures: a cross-layer approach.” 2017. Doctoral Dissertation, Northeastern University. Accessed September 16, 2019. http://hdl.handle.net/2047/D20254348.

MLA Handbook (7th Edition):

Momeni, Amir. “Exploiting thread-level parallelism on reconfigurable architectures: a cross-layer approach.” 2017. Web. 16 Sep 2019.

Vancouver:

Momeni A. Exploiting thread-level parallelism on reconfigurable architectures: a cross-layer approach. [Internet] [Doctoral dissertation]. Northeastern University; 2017. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/2047/D20254348.

Council of Science Editors:

Momeni A. Exploiting thread-level parallelism on reconfigurable architectures: a cross-layer approach. [Doctoral Dissertation]. Northeastern University; 2017. Available from: http://hdl.handle.net/2047/D20254348


Northeastern University

14. Huang, Kai. K-means parallelism on FPGA.

Degree: MS, Department of Electrical and Computer Engineering, 2018, Northeastern University

 The K-means algorithm, which partitions observations into different clusters, is often used for extremely large dataset analysis in data mining. A big issue with K-means… (more)

Subjects/Keywords: FPGA; K-means; parallelism


APA (6th Edition):

Huang, K. (2018). K-means parallelism on FPGA. (Masters Thesis). Northeastern University. Retrieved from http://hdl.handle.net/2047/D20289338

Chicago Manual of Style (16th Edition):

Huang, Kai. “K-means parallelism on FPGA.” 2018. Masters Thesis, Northeastern University. Accessed September 16, 2019. http://hdl.handle.net/2047/D20289338.

MLA Handbook (7th Edition):

Huang, Kai. “K-means parallelism on FPGA.” 2018. Web. 16 Sep 2019.

Vancouver:

Huang K. K-means parallelism on FPGA. [Internet] [Masters thesis]. Northeastern University; 2018. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/2047/D20289338.

Council of Science Editors:

Huang K. K-means parallelism on FPGA. [Masters Thesis]. Northeastern University; 2018. Available from: http://hdl.handle.net/2047/D20289338
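The parallelism in K-means that an FPGA can exploit lives in the assignment step: every point-to-centroid distance is computed independently. A minimal 1-D sketch of Lloyd's algorithm in Python (purely illustrative — this is not the thesis's FPGA design, and the sample data is invented):

```python
def assign(points, centroids):
    # For each point, the index of the nearest centroid. Every distance
    # here is independent — the data-parallel step.
    return [min(range(len(centroids)), key=lambda c: abs(p - centroids[c]))
            for p in points]

def update(points, labels, k):
    # New centroid = mean of the points assigned to it.
    out = []
    for c in range(k):
        members = [p for p, lbl in zip(points, labels) if lbl == c]
        out.append(sum(members) / len(members) if members else 0.0)
    return out

def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        centroids = update(points, assign(points, centroids), len(centroids))
    return centroids

# Two obvious 1-D clusters; converges to centroids near 1.0 and 9.0.
print(kmeans([1.0, 1.2, 0.8, 9.0, 9.4, 8.6], [0.0, 5.0]))
```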


The Ohio State University

15. Van Valkenburgh, Kevin. Measuring and Improving the Potential Parallelism of Sequential Java Programs.

Degree: MS, Computer Science and Engineering, 2009, The Ohio State University

 There is a growing need for parallel algorithms and their implementations, due to the continued rise in the use of multi-core machines. When trying to… (more)

Subjects/Keywords: Computer Science; Java; parallelism; multi-core; potential parallelism


APA (6th Edition):

Van Valkenburgh, K. (2009). Measuring and Improving the Potential Parallelism of Sequential Java Programs. (Masters Thesis). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1250594496

Chicago Manual of Style (16th Edition):

Van Valkenburgh, Kevin. “Measuring and Improving the Potential Parallelism of Sequential Java Programs.” 2009. Masters Thesis, The Ohio State University. Accessed September 16, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1250594496.

MLA Handbook (7th Edition):

Van Valkenburgh, Kevin. “Measuring and Improving the Potential Parallelism of Sequential Java Programs.” 2009. Web. 16 Sep 2019.

Vancouver:

Van Valkenburgh K. Measuring and Improving the Potential Parallelism of Sequential Java Programs. [Internet] [Masters thesis]. The Ohio State University; 2009. [cited 2019 Sep 16]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1250594496.

Council of Science Editors:

Van Valkenburgh K. Measuring and Improving the Potential Parallelism of Sequential Java Programs. [Masters Thesis]. The Ohio State University; 2009. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1250594496


Queensland University of Technology

16. Craik, Andrew. A framework for reasoning about inherent parallelism in modern object-oriented languages.

Degree: 2011, Queensland University of Technology

 With the emergence of multi-core processors into the mainstream, parallel programming is no longer the specialized domain it once was. There is a growing need… (more)

Subjects/Keywords: programming languages; Ownership Types; parallelization; inherent parallelism; conditional parallelism; effect system


APA (6th Edition):

Craik, A. (2011). A framework for reasoning about inherent parallelism in modern object-oriented languages. (Thesis). Queensland University of Technology. Retrieved from https://eprints.qut.edu.au/40877/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Craik, Andrew. “A framework for reasoning about inherent parallelism in modern object-oriented languages.” 2011. Thesis, Queensland University of Technology. Accessed September 16, 2019. https://eprints.qut.edu.au/40877/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Craik, Andrew. “A framework for reasoning about inherent parallelism in modern object-oriented languages.” 2011. Web. 16 Sep 2019.

Vancouver:

Craik A. A framework for reasoning about inherent parallelism in modern object-oriented languages. [Internet] [Thesis]. Queensland University of Technology; 2011. [cited 2019 Sep 16]. Available from: https://eprints.qut.edu.au/40877/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Craik A. A framework for reasoning about inherent parallelism in modern object-oriented languages. [Thesis]. Queensland University of Technology; 2011. Available from: https://eprints.qut.edu.au/40877/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Universidade Nova

17. Delgado, Nuno Miguel de Brito. A system’s approach to cache hierarchy-aware decomposition of data-parallel computations.

Degree: 2014, Universidade Nova

Dissertation submitted to obtain the degree of Master in Informatics Engineering

The architecture of today's processors is very complex, comprising several computational cores and an intricate… (more)

Subjects/Keywords: Data-parallelism; Hierarchical parallelism; Domain decomposition; Runtime systems


APA (6th Edition):

Delgado, N. M. d. B. (2014). A system’s approach to cache hierarchy-aware decomposition of data-parallel computations. (Thesis). Universidade Nova. Retrieved from http://www.rcaap.pt/detail.jsp?id=oai:run.unl.pt:10362/13014

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Delgado, Nuno Miguel de Brito. “A system’s approach to cache hierarchy-aware decomposition of data-parallel computations.” 2014. Thesis, Universidade Nova. Accessed September 16, 2019. http://www.rcaap.pt/detail.jsp?id=oai:run.unl.pt:10362/13014.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Delgado, Nuno Miguel de Brito. “A system’s approach to cache hierarchy-aware decomposition of data-parallel computations.” 2014. Web. 16 Sep 2019.

Vancouver:

Delgado NMdB. A system’s approach to cache hierarchy-aware decomposition of data-parallel computations. [Internet] [Thesis]. Universidade Nova; 2014. [cited 2019 Sep 16]. Available from: http://www.rcaap.pt/detail.jsp?id=oai:run.unl.pt:10362/13014.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Delgado NMdB. A system’s approach to cache hierarchy-aware decomposition of data-parallel computations. [Thesis]. Universidade Nova; 2014. Available from: http://www.rcaap.pt/detail.jsp?id=oai:run.unl.pt:10362/13014

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Minnesota

18. Kolpe, Tejaswini. Power management in multicore processors through clustered DVFS.

Degree: MS, Electrical Engineering, 2010, University of Minnesota

University of Minnesota M.S. thesis. July 2010. Major: Electrical Engineering. Advisor: Sachin Suresh Sapatnekar. 1 computer file (PDF); viii, 57 pages. Ill. (some col.)

The… (more)

Subjects/Keywords: Cores; Processors; Clusters; Parallelism; Electrical Engineering


APA (6th Edition):

Kolpe, T. (2010). Power management in multicore processors through clustered DVFS. (Masters Thesis). University of Minnesota. Retrieved from http://purl.umn.edu/93628

Chicago Manual of Style (16th Edition):

Kolpe, Tejaswini. “Power management in multicore processors through clustered DVFS.” 2010. Masters Thesis, University of Minnesota. Accessed September 16, 2019. http://purl.umn.edu/93628.

MLA Handbook (7th Edition):

Kolpe, Tejaswini. “Power management in multicore processors through clustered DVFS.” 2010. Web. 16 Sep 2019.

Vancouver:

Kolpe T. Power management in multicore processors through clustered DVFS. [Internet] [Masters thesis]. University of Minnesota; 2010. [cited 2019 Sep 16]. Available from: http://purl.umn.edu/93628.

Council of Science Editors:

Kolpe T. Power management in multicore processors through clustered DVFS. [Masters Thesis]. University of Minnesota; 2010. Available from: http://purl.umn.edu/93628

19. Johnell, Carl. Parallel programming in Go and Scala : A performance comparison.

Degree: 2015, Department of Software Engineering

      This thesis provides a performance comparison of parallel programming in Go and Scala. Go supports concurrency through goroutines and channels. Scala has… (more)

Subjects/Keywords: Go; Scala; parallelism; concurrency; Software Engineering; Programvaruteknik

APA (6th Edition):

Johnell, C. (2015). Parallel programming in Go and Scala : A performance comparison. (Thesis). Department of Software Engineering. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:bth-996

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Johnell, Carl. “Parallel programming in Go and Scala : A performance comparison.” 2015. Thesis, Department of Software Engineering. Accessed September 16, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-996.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Johnell, Carl. “Parallel programming in Go and Scala : A performance comparison.” 2015. Web. 16 Sep 2019.

Vancouver:

Johnell C. Parallel programming in Go and Scala : A performance comparison. [Internet] [Thesis]. Department of Software Engineering; 2015. [cited 2019 Sep 16]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-996.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Johnell C. Parallel programming in Go and Scala : A performance comparison. [Thesis]. Department of Software Engineering; 2015. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-996

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


EPFL

20. Rompf, Tiark. Lightweight Modular Staging and Embedded Compilers: Abstraction without Regret for High-Level High-Performance Programming.

Degree: 2012, EPFL

 Programs expressed in a high-level programming language need to be translated to a low-level machine dialect for execution. This translation is usually accomplished by a… (more)

Subjects/Keywords: Programming Languages; Compilers; Staging; Performance; Parallelism

APA (6th Edition):

Rompf, T. (2012). Lightweight Modular Staging and Embedded Compilers: Abstraction without Regret for High-Level High-Performance Programming. (Thesis). EPFL. Retrieved from http://infoscience.epfl.ch/record/180642

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Rompf, Tiark. “Lightweight Modular Staging and Embedded Compilers: Abstraction without Regret for High-Level High-Performance Programming.” 2012. Thesis, EPFL. Accessed September 16, 2019. http://infoscience.epfl.ch/record/180642.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Rompf, Tiark. “Lightweight Modular Staging and Embedded Compilers: Abstraction without Regret for High-Level High-Performance Programming.” 2012. Web. 16 Sep 2019.

Vancouver:

Rompf T. Lightweight Modular Staging and Embedded Compilers: Abstraction without Regret for High-Level High-Performance Programming. [Internet] [Thesis]. EPFL; 2012. [cited 2019 Sep 16]. Available from: http://infoscience.epfl.ch/record/180642.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Rompf T. Lightweight Modular Staging and Embedded Compilers: Abstraction without Regret for High-Level High-Performance Programming. [Thesis]. EPFL; 2012. Available from: http://infoscience.epfl.ch/record/180642

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Georgia Tech

21. Wang, Jin. Acceleration and optimization of dynamic parallelism for irregular applications on GPUs.

Degree: PhD, Electrical and Computer Engineering, 2016, Georgia Tech

 The objective of this thesis is the development, implementation and optimization of a GPU execution model extension that efficiently supports time-varying, nested, fine-grained dynamic parallelism… (more)

Subjects/Keywords: General-purpose GPU; Dynamic parallelism; Irregular applications

APA (6th Edition):

Wang, J. (2016). Acceleration and optimization of dynamic parallelism for irregular applications on GPUs. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/56294

Chicago Manual of Style (16th Edition):

Wang, Jin. “Acceleration and optimization of dynamic parallelism for irregular applications on GPUs.” 2016. Doctoral Dissertation, Georgia Tech. Accessed September 16, 2019. http://hdl.handle.net/1853/56294.

MLA Handbook (7th Edition):

Wang, Jin. “Acceleration and optimization of dynamic parallelism for irregular applications on GPUs.” 2016. Web. 16 Sep 2019.

Vancouver:

Wang J. Acceleration and optimization of dynamic parallelism for irregular applications on GPUs. [Internet] [Doctoral dissertation]. Georgia Tech; 2016. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1853/56294.

Council of Science Editors:

Wang J. Acceleration and optimization of dynamic parallelism for irregular applications on GPUs. [Doctoral Dissertation]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/56294

22. Srivastava, Pallavi. Exploring model parallelism in distributed scheduling of neural network frameworks.

Degree: MS, Computer Science, 2018, University of Illinois – Urbana-Champaign

 The growth in size and computational requirements in training Neural Networks (NN) over the past few years has led to an increase in their sizes.… (more)

Subjects/Keywords: model parallelism

APA (6th Edition):

Srivastava, P. (2018). Exploring model parallelism in distributed scheduling of neural network frameworks. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/101625

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Srivastava, Pallavi. “Exploring model parallelism in distributed scheduling of neural network frameworks.” 2018. Thesis, University of Illinois – Urbana-Champaign. Accessed September 16, 2019. http://hdl.handle.net/2142/101625.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Srivastava, Pallavi. “Exploring model parallelism in distributed scheduling of neural network frameworks.” 2018. Web. 16 Sep 2019.

Vancouver:

Srivastava P. Exploring model parallelism in distributed scheduling of neural network frameworks. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2018. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/2142/101625.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Srivastava P. Exploring model parallelism in distributed scheduling of neural network frameworks. [Thesis]. University of Illinois – Urbana-Champaign; 2018. Available from: http://hdl.handle.net/2142/101625

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Colorado

23. Price, Graham David. Dynamic Trace Analysis with Zero-Suppressed BDDs.

Degree: PhD, Electrical, Computer & Energy Engineering, 2011, University of Colorado

  Instruction level parallelism (ILP) limitations have forced processor manufacturers to develop multi-core platforms with the expectation that programs will be able to exploit thread… (more)

Subjects/Keywords: Dynamic Program Analysis; Parallelism; Computer Sciences; Engineering

APA (6th Edition):

Price, G. D. (2011). Dynamic Trace Analysis with Zero-Suppressed BDDs. (Doctoral Dissertation). University of Colorado. Retrieved from https://scholar.colorado.edu/ecen_gradetds/27

Chicago Manual of Style (16th Edition):

Price, Graham David. “Dynamic Trace Analysis with Zero-Suppressed BDDs.” 2011. Doctoral Dissertation, University of Colorado. Accessed September 16, 2019. https://scholar.colorado.edu/ecen_gradetds/27.

MLA Handbook (7th Edition):

Price, Graham David. “Dynamic Trace Analysis with Zero-Suppressed BDDs.” 2011. Web. 16 Sep 2019.

Vancouver:

Price GD. Dynamic Trace Analysis with Zero-Suppressed BDDs. [Internet] [Doctoral dissertation]. University of Colorado; 2011. [cited 2019 Sep 16]. Available from: https://scholar.colorado.edu/ecen_gradetds/27.

Council of Science Editors:

Price GD. Dynamic Trace Analysis with Zero-Suppressed BDDs. [Doctoral Dissertation]. University of Colorado; 2011. Available from: https://scholar.colorado.edu/ecen_gradetds/27


Virginia Tech

24. Hou, Kaixi. Exploring Performance Portability for Accelerators via High-level Parallel Patterns.

Degree: PhD, Computer Science, 2018, Virginia Tech

 Nowadays, parallel accelerators have become prominent and ubiquitous, e.g., multi-core CPUs, many-core GPUs (Graphics Processing Units) and Intel Xeon Phi. The performance gains from them… (more)

Subjects/Keywords: GPU; AVX; sort; stencil; wavefront; pattern; parallelism

APA (6th Edition):

Hou, K. (2018). Exploring Performance Portability for Accelerators via High-level Parallel Patterns. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/84923

Chicago Manual of Style (16th Edition):

Hou, Kaixi. “Exploring Performance Portability for Accelerators via High-level Parallel Patterns.” 2018. Doctoral Dissertation, Virginia Tech. Accessed September 16, 2019. http://hdl.handle.net/10919/84923.

MLA Handbook (7th Edition):

Hou, Kaixi. “Exploring Performance Portability for Accelerators via High-level Parallel Patterns.” 2018. Web. 16 Sep 2019.

Vancouver:

Hou K. Exploring Performance Portability for Accelerators via High-level Parallel Patterns. [Internet] [Doctoral dissertation]. Virginia Tech; 2018. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/10919/84923.

Council of Science Editors:

Hou K. Exploring Performance Portability for Accelerators via High-level Parallel Patterns. [Doctoral Dissertation]. Virginia Tech; 2018. Available from: http://hdl.handle.net/10919/84923


Virginia Tech

25. Lee, Jong-Suk Mark. FleXilicon: a New Coarse-grained Reconfigurable Architecture for Multimedia and Wireless Communications.

Degree: PhD, Electrical and Computer Engineering, 2010, Virginia Tech

 High computing power and flexibility are important design factors for multimedia and wireless communication applications due to the demand for high quality services and frequent… (more)

Subjects/Keywords: Loop-level parallelism; Reconfigurable architecture; array processing

APA (6th Edition):

Lee, J. M. (2010). FleXilicon: a New Coarse-grained Reconfigurable Architecture for Multimedia and Wireless Communications. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/77094

Chicago Manual of Style (16th Edition):

Lee, Jong-Suk Mark. “FleXilicon: a New Coarse-grained Reconfigurable Architecture for Multimedia and Wireless Communications.” 2010. Doctoral Dissertation, Virginia Tech. Accessed September 16, 2019. http://hdl.handle.net/10919/77094.

MLA Handbook (7th Edition):

Lee, Jong-Suk Mark. “FleXilicon: a New Coarse-grained Reconfigurable Architecture for Multimedia and Wireless Communications.” 2010. Web. 16 Sep 2019.

Vancouver:

Lee JM. FleXilicon: a New Coarse-grained Reconfigurable Architecture for Multimedia and Wireless Communications. [Internet] [Doctoral dissertation]. Virginia Tech; 2010. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/10919/77094.

Council of Science Editors:

Lee JM. FleXilicon: a New Coarse-grained Reconfigurable Architecture for Multimedia and Wireless Communications. [Doctoral Dissertation]. Virginia Tech; 2010. Available from: http://hdl.handle.net/10919/77094


Boise State University

26. Sutton, Christopher Robert. Performance, Scalability, and Robustness in Distributed File Tree Copy.

Degree: 2018, Boise State University

 As storage needs continually increase, and network file systems become more common, the need arises for tools that efficiently copy to and from these types… (more)

Subjects/Keywords: parallelism; performance; scalability; robustness; Software Engineering

APA (6th Edition):

Sutton, C. R. (2018). Performance, Scalability, and Robustness in Distributed File Tree Copy. (Thesis). Boise State University. Retrieved from https://scholarworks.boisestate.edu/td/1444

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Sutton, Christopher Robert. “Performance, Scalability, and Robustness in Distributed File Tree Copy.” 2018. Thesis, Boise State University. Accessed September 16, 2019. https://scholarworks.boisestate.edu/td/1444.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Sutton, Christopher Robert. “Performance, Scalability, and Robustness in Distributed File Tree Copy.” 2018. Web. 16 Sep 2019.

Vancouver:

Sutton CR. Performance, Scalability, and Robustness in Distributed File Tree Copy. [Internet] [Thesis]. Boise State University; 2018. [cited 2019 Sep 16]. Available from: https://scholarworks.boisestate.edu/td/1444.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Sutton CR. Performance, Scalability, and Robustness in Distributed File Tree Copy. [Thesis]. Boise State University; 2018. Available from: https://scholarworks.boisestate.edu/td/1444

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Cornell University

27. Kulkarni, Milind Vidyadhar. The Galois System: Optimistic Parallelization of Irregular Programs.

Degree: 2008, Cornell University

 The last several years have seen multicore architectures become ascendant in the computing world. As a result, it is no longer sufficient to rely on… (more)

Subjects/Keywords: parallelism; irregular programs; abstractions; automatic parallelization

APA (6th Edition):

Kulkarni, M. V. (2008). The Galois System: Optimistic Parallelization of Irregular Programs. (Thesis). Cornell University. Retrieved from http://hdl.handle.net/1813/11139

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Kulkarni, Milind Vidyadhar. “The Galois System: Optimistic Parallelization of Irregular Programs.” 2008. Thesis, Cornell University. Accessed September 16, 2019. http://hdl.handle.net/1813/11139.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Kulkarni, Milind Vidyadhar. “The Galois System: Optimistic Parallelization of Irregular Programs.” 2008. Web. 16 Sep 2019.

Vancouver:

Kulkarni MV. The Galois System: Optimistic Parallelization of Irregular Programs. [Internet] [Thesis]. Cornell University; 2008. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1813/11139.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Kulkarni MV. The Galois System: Optimistic Parallelization of Irregular Programs. [Thesis]. Cornell University; 2008. Available from: http://hdl.handle.net/1813/11139

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Princeton University

28. Oh, Taewook. Automatic Exploitation of Input Parallelism.

Degree: PhD, 2015, Princeton University

Parallelism may reside in the input of a program rather than the program itself. A script interpreter, for example, is hard to parallelize because its… (more)

Subjects/Keywords: Automatic Parallelization; Input Parallelism; Program Specialization

APA (6th Edition):

Oh, T. (2015). Automatic Exploitation of Input Parallelism. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp018336h4299

Chicago Manual of Style (16th Edition):

Oh, Taewook. “Automatic Exploitation of Input Parallelism.” 2015. Doctoral Dissertation, Princeton University. Accessed September 16, 2019. http://arks.princeton.edu/ark:/88435/dsp018336h4299.

MLA Handbook (7th Edition):

Oh, Taewook. “Automatic Exploitation of Input Parallelism.” 2015. Web. 16 Sep 2019.

Vancouver:

Oh T. Automatic Exploitation of Input Parallelism. [Internet] [Doctoral dissertation]. Princeton University; 2015. [cited 2019 Sep 16]. Available from: http://arks.princeton.edu/ark:/88435/dsp018336h4299.

Council of Science Editors:

Oh T. Automatic Exploitation of Input Parallelism. [Doctoral Dissertation]. Princeton University; 2015. Available from: http://arks.princeton.edu/ark:/88435/dsp018336h4299


University of Illinois – Urbana-Champaign

29. El Hajj, Izzat. Techniques for optimizing dynamic parallelism on graphics processing units.

Degree: PhD, Electrical & Computer Engr, 2018, University of Illinois – Urbana-Champaign

 Dynamic parallelism is a feature of general purpose graphics processing units (GPUs) whereby threads running on a GPU can spawn other threads without CPU intervention.… (more)

Subjects/Keywords: Graphics Processing Units; Dynamic Parallelism; Compilers; CUDA

APA (6th Edition):

El Hajj, I. (2018). Techniques for optimizing dynamic parallelism on graphics processing units. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/102488

Chicago Manual of Style (16th Edition):

El Hajj, Izzat. “Techniques for optimizing dynamic parallelism on graphics processing units.” 2018. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed September 16, 2019. http://hdl.handle.net/2142/102488.

MLA Handbook (7th Edition):

El Hajj, Izzat. “Techniques for optimizing dynamic parallelism on graphics processing units.” 2018. Web. 16 Sep 2019.

Vancouver:

El Hajj I. Techniques for optimizing dynamic parallelism on graphics processing units. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2018. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/2142/102488.

Council of Science Editors:

El Hajj I. Techniques for optimizing dynamic parallelism on graphics processing units. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2018. Available from: http://hdl.handle.net/2142/102488


University of Waterloo

30. Delisle, Thierry. Concurrency in C∀.

Degree: 2018, University of Waterloo

 C∀ is a modern, non-object-oriented extension of the C programming language. This thesis serves as a definition and an implementation for the concurrency and parallelism… (more)

Subjects/Keywords: programming language; concurrency and parallelism; threading; C∀

APA (6th Edition):

Delisle, T. (2018). Concurrency in C∀. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/12888

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Delisle, Thierry. “Concurrency in C∀.” 2018. Thesis, University of Waterloo. Accessed September 16, 2019. http://hdl.handle.net/10012/12888.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Delisle, Thierry. “Concurrency in C∀.” 2018. Web. 16 Sep 2019.

Vancouver:

Delisle T. Concurrency in C∀. [Internet] [Thesis]. University of Waterloo; 2018. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/10012/12888.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Delisle T. Concurrency in C∀. [Thesis]. University of Waterloo; 2018. Available from: http://hdl.handle.net/10012/12888

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
