You searched for subject:(Parallelism). Showing records 1 – 30 of 458 total matches.

University of Illinois – Urbana-Champaign

1. Brodman, James C. Data parallelism with hierarchically tiled objects.

Degree: PhD, 0112, 2011, University of Illinois – Urbana-Champaign

 Exploiting parallelism in modern machines increases the difficulty of developing applications. Thus, new abstractions are needed that facilitate parallel programming and at the same… (more)

Subjects/Keywords: parallelism; parallel programming; data parallelism; tiling

APA (6th Edition):

Brodman, J. C. (2011). Data parallelism with hierarchically tiled objects. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/24316

Chicago Manual of Style (16th Edition):

Brodman, James C. “Data parallelism with hierarchically tiled objects.” 2011. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed March 07, 2021. http://hdl.handle.net/2142/24316.

MLA Handbook (7th Edition):

Brodman, James C. “Data parallelism with hierarchically tiled objects.” 2011. Web. 07 Mar 2021.

Vancouver:

Brodman JC. Data parallelism with hierarchically tiled objects. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2011. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2142/24316.

Council of Science Editors:

Brodman JC. Data parallelism with hierarchically tiled objects. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2011. Available from: http://hdl.handle.net/2142/24316


University of Georgia

2. Bhattiprolu, Srikanth. TAP : A tool for evaluating different processor assignments in task and data parallel programs.

Degree: 2014, University of Georgia

 A parallel program is usually written using either data parallelism or task parallelism. With data parallelism, each processor executes the same code but operates on… (more)
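
As a point of reference for the distinction drawn in this abstract, here is a minimal Python sketch of the data-parallel pattern (the same code run by every worker, each on a different slice of the data). It is not taken from the thesis or from TAP; the function and variable names are illustrative only.

    # Illustrative only (not from TAP): data parallelism means every worker
    # runs the same function, each on a different slice of the input data.
    # Task parallelism, by contrast, would give each worker a different function.
    from multiprocessing import Pool

    def square_chunk(chunk):
        # Same code on every processor; only the data differs.
        return [x * x for x in chunk]

    if __name__ == "__main__":
        data = list(range(1_000_000))
        n_workers = 4
        size = len(data) // n_workers
        chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
        with Pool(processes=n_workers) as pool:
            partial = pool.map(square_chunk, chunks)
        result = [y for part in partial for y in part]
        print(len(result))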

Subjects/Keywords: task parallelism; data parallelism; processor assignment

APA (6th Edition):

Bhattiprolu, S. (2014). TAP : A tool for evaluating different processor assignments in task and data parallel programs. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/20196

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Bhattiprolu, Srikanth. “TAP : A tool for evaluating different processor assignments in task and data parallel programs.” 2014. Thesis, University of Georgia. Accessed March 07, 2021. http://hdl.handle.net/10724/20196.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Bhattiprolu, Srikanth. “TAP : A tool for evaluating different processor assignments in task and data parallel programs.” 2014. Web. 07 Mar 2021.

Vancouver:

Bhattiprolu S. TAP : A tool for evaluating different processor assignments in task and data parallel programs. [Internet] [Thesis]. University of Georgia; 2014. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10724/20196.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bhattiprolu S. TAP : A tool for evaluating different processor assignments in task and data parallel programs. [Thesis]. University of Georgia; 2014. Available from: http://hdl.handle.net/10724/20196

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Rochester

3. Kelsey, Kirk M. Coarse-grained speculative parallelism and optimization.

Degree: PhD, 2011, University of Rochester

 The computing industry has long relied on computation becoming faster through steady exponential growth in the density of transistors on a chip. While the growth… (more)

Subjects/Keywords: Optimization; Parallelism; Speculation

APA (6th Edition):

Kelsey, K. M. (2011). Coarse-grained speculative parallelism and optimization. (Doctoral Dissertation). University of Rochester. Retrieved from http://hdl.handle.net/1802/16955

Chicago Manual of Style (16th Edition):

Kelsey, Kirk M. “Coarse-grained speculative parallelism and optimization.” 2011. Doctoral Dissertation, University of Rochester. Accessed March 07, 2021. http://hdl.handle.net/1802/16955.

MLA Handbook (7th Edition):

Kelsey, Kirk M. “Coarse-grained speculative parallelism and optimization.” 2011. Web. 07 Mar 2021.

Vancouver:

Kelsey KM. Coarse-grained speculative parallelism and optimization. [Internet] [Doctoral dissertation]. University of Rochester; 2011. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1802/16955.

Council of Science Editors:

Kelsey KM. Coarse-grained speculative parallelism and optimization. [Doctoral Dissertation]. University of Rochester; 2011. Available from: http://hdl.handle.net/1802/16955


Texas A&M University

4. Fatehi, Ehsan. ILP and TLP in Shared Memory Applications: A Limit Study.

Degree: PhD, Computer Engineering, 2015, Texas A&M University

 The work in this dissertation explores the limits of chip multiprocessors (CMPs) with respect to shared-memory, multi-threaded benchmarks, which will help identify microarchitectural bottlenecks.… (more)

Subjects/Keywords: instruction-level parallelism; limits parallelism; concurrency pthreads; thread-level parallelism

APA (6th Edition):

Fatehi, E. (2015). ILP and TLP in Shared Memory Applications: A Limit Study. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/155119

Chicago Manual of Style (16th Edition):

Fatehi, Ehsan. “ILP and TLP in Shared Memory Applications: A Limit Study.” 2015. Doctoral Dissertation, Texas A&M University. Accessed March 07, 2021. http://hdl.handle.net/1969.1/155119.

MLA Handbook (7th Edition):

Fatehi, Ehsan. “ILP and TLP in Shared Memory Applications: A Limit Study.” 2015. Web. 07 Mar 2021.

Vancouver:

Fatehi E. ILP and TLP in Shared Memory Applications: A Limit Study. [Internet] [Doctoral dissertation]. Texas A&M University; 2015. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1969.1/155119.

Council of Science Editors:

Fatehi E. ILP and TLP in Shared Memory Applications: A Limit Study. [Doctoral Dissertation]. Texas A&M University; 2015. Available from: http://hdl.handle.net/1969.1/155119


Cornell University

5. Tian, Yuan. A Parallel Implementation Of Hierarchical Belief Propagation.

Degree: M.S., Electrical Engineering, 2013, Cornell University

 Though Belief Propagation (BP) algorithms generate high quality results for a wide range of Markov Random Field (MRF) formulated energy minimization problems, they require large… (more)

Subjects/Keywords: Belief Propagation; Graphical Models; Parallelism

APA (6th Edition):

Tian, Y. (2013). A Parallel Implementation Of Hierarchical Belief Propagation. (Masters Thesis). Cornell University. Retrieved from http://hdl.handle.net/1813/34099

Chicago Manual of Style (16th Edition):

Tian, Yuan. “A Parallel Implementation Of Hierarchical Belief Propagation.” 2013. Masters Thesis, Cornell University. Accessed March 07, 2021. http://hdl.handle.net/1813/34099.

MLA Handbook (7th Edition):

Tian, Yuan. “A Parallel Implementation Of Hierarchical Belief Propagation.” 2013. Web. 07 Mar 2021.

Vancouver:

Tian Y. A Parallel Implementation Of Hierarchical Belief Propagation. [Internet] [Masters thesis]. Cornell University; 2013. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1813/34099.

Council of Science Editors:

Tian Y. A Parallel Implementation Of Hierarchical Belief Propagation. [Masters Thesis]. Cornell University; 2013. Available from: http://hdl.handle.net/1813/34099


Penn State University

6. Yedlapalli, Praveen. A Study of Parallelism-locality Tradeoffs across Memory Hierarchy.

Degree: 2015, Penn State University

 As the number of cores on a chip increases, the memory bandwidth requirements become a scalability issue. Current CMPs incorporate multiple resources both on-chip and… (more)

Subjects/Keywords: Memory; SOC; parallelism; locality

APA (6th Edition):

Yedlapalli, P. (2015). A Study of Parallelism-locality Tradeoffs across Memory Hierarchy. (Thesis). Penn State University. Retrieved from https://submit-etda.libraries.psu.edu/catalog/26536

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Yedlapalli, Praveen. “A Study of Parallelism-locality Tradeoffs across Memory Hierarchy.” 2015. Thesis, Penn State University. Accessed March 07, 2021. https://submit-etda.libraries.psu.edu/catalog/26536.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Yedlapalli, Praveen. “A Study of Parallelism-locality Tradeoffs across Memory Hierarchy.” 2015. Web. 07 Mar 2021.

Vancouver:

Yedlapalli P. A Study of Parallelism-locality Tradeoffs across Memory Hierarchy. [Internet] [Thesis]. Penn State University; 2015. [cited 2021 Mar 07]. Available from: https://submit-etda.libraries.psu.edu/catalog/26536.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Yedlapalli P. A Study of Parallelism-locality Tradeoffs across Memory Hierarchy. [Thesis]. Penn State University; 2015. Available from: https://submit-etda.libraries.psu.edu/catalog/26536

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Manchester

7. Simsek, Osman Seckin. LEVERAGING DATA-FLOW INFORMATION FOR EFFICIENT SCHEDULING OF TASK-PARALLEL PROGRAMS ON HETEROGENEOUS SYSTEMS.

Degree: 2020, University of Manchester

 Writing efficient programs for heterogeneous platforms is challenging: programmers must deal with multiple programming models, partition work for CPUs and accelerators with different compute capabilities,… (more)

Subjects/Keywords: Task parallelism; Heterogeneous systems; Scheduling

APA (6th Edition):

Simsek, O. S. (2020). LEVERAGING DATA-FLOW INFORMATION FOR EFFICIENT SCHEDULING OF TASK-PARALLEL PROGRAMS ON HETEROGENEOUS SYSTEMS. (Doctoral Dissertation). University of Manchester. Retrieved from http://www.manchester.ac.uk/escholar/uk-ac-man-scw:324388

Chicago Manual of Style (16th Edition):

Simsek, Osman Seckin. “LEVERAGING DATA-FLOW INFORMATION FOR EFFICIENT SCHEDULING OF TASK-PARALLEL PROGRAMS ON HETEROGENEOUS SYSTEMS.” 2020. Doctoral Dissertation, University of Manchester. Accessed March 07, 2021. http://www.manchester.ac.uk/escholar/uk-ac-man-scw:324388.

MLA Handbook (7th Edition):

Simsek, Osman Seckin. “LEVERAGING DATA-FLOW INFORMATION FOR EFFICIENT SCHEDULING OF TASK-PARALLEL PROGRAMS ON HETEROGENEOUS SYSTEMS.” 2020. Web. 07 Mar 2021.

Vancouver:

Simsek OS. LEVERAGING DATA-FLOW INFORMATION FOR EFFICIENT SCHEDULING OF TASK-PARALLEL PROGRAMS ON HETEROGENEOUS SYSTEMS. [Internet] [Doctoral dissertation]. University of Manchester; 2020. [cited 2021 Mar 07]. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:324388.

Council of Science Editors:

Simsek OS. LEVERAGING DATA-FLOW INFORMATION FOR EFFICIENT SCHEDULING OF TASK-PARALLEL PROGRAMS ON HETEROGENEOUS SYSTEMS. [Doctoral Dissertation]. University of Manchester; 2020. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:324388


Northeastern University

8. Momeni, Amir. Exploiting thread-level parallelism on reconfigurable architectures: a cross-layer approach.

Degree: PhD, Department of Electrical and Computer Engineering, 2017, Northeastern University

 Field Programmable Gate Arrays (FPGAs) are one major class of architectures commonly used in parallel computing systems. FPGAs provide a massive number (i.e., millions) of programmable… (more)

Subjects/Keywords: FPGA; GPU; OpenCL; parallelism

APA (6th Edition):

Momeni, A. (2017). Exploiting thread-level parallelism on reconfigurable architectures: a cross-layer approach. (Doctoral Dissertation). Northeastern University. Retrieved from http://hdl.handle.net/2047/D20254348

Chicago Manual of Style (16th Edition):

Momeni, Amir. “Exploiting thread-level parallelism on reconfigurable architectures: a cross-layer approach.” 2017. Doctoral Dissertation, Northeastern University. Accessed March 07, 2021. http://hdl.handle.net/2047/D20254348.

MLA Handbook (7th Edition):

Momeni, Amir. “Exploiting thread-level parallelism on reconfigurable architectures: a cross-layer approach.” 2017. Web. 07 Mar 2021.

Vancouver:

Momeni A. Exploiting thread-level parallelism on reconfigurable architectures: a cross-layer approach. [Internet] [Doctoral dissertation]. Northeastern University; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2047/D20254348.

Council of Science Editors:

Momeni A. Exploiting thread-level parallelism on reconfigurable architectures: a cross-layer approach. [Doctoral Dissertation]. Northeastern University; 2017. Available from: http://hdl.handle.net/2047/D20254348


Northeastern University

9. Huang, Kai. K-means parallelism on FPGA.

Degree: MS, Department of Electrical and Computer Engineering, 2018, Northeastern University

 The K-means algorithm, which partitions observations into different clusters, is often used for extremely large dataset analysis in data mining. A big issue with K-means… (more)
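
The parallelism in K-means comes largely from the assignment step, where every observation's nearest centroid can be computed independently. The toy CPU sketch below (Python with a process pool, not the FPGA design studied in the thesis; all names are made up) shows that independence.

    # Toy sketch, not the thesis's FPGA implementation: the K-means assignment
    # step is embarrassingly parallel because each point's nearest centroid
    # can be computed independently of every other point.
    from multiprocessing import Pool

    CENTROIDS = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]  # fixed for one iteration

    def nearest_centroid(point):
        px, py = point
        dists = [(px - cx) ** 2 + (py - cy) ** 2 for cx, cy in CENTROIDS]
        return dists.index(min(dists))

    if __name__ == "__main__":
        points = [(float(i % 12), float(i % 7)) for i in range(10_000)]
        with Pool(processes=4) as pool:
            labels = pool.map(nearest_centroid, points, chunksize=1_000)
        print(labels[:10])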

Subjects/Keywords: FPGA; K-means; parallelism

APA (6th Edition):

Huang, K. (2018). K-means parallelism on FPGA. (Masters Thesis). Northeastern University. Retrieved from http://hdl.handle.net/2047/D20289338

Chicago Manual of Style (16th Edition):

Huang, Kai. “K-means parallelism on FPGA.” 2018. Masters Thesis, Northeastern University. Accessed March 07, 2021. http://hdl.handle.net/2047/D20289338.

MLA Handbook (7th Edition):

Huang, Kai. “K-means parallelism on FPGA.” 2018. Web. 07 Mar 2021.

Vancouver:

Huang K. K-means parallelism on FPGA. [Internet] [Masters thesis]. Northeastern University; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2047/D20289338.

Council of Science Editors:

Huang K. K-means parallelism on FPGA. [Masters Thesis]. Northeastern University; 2018. Available from: http://hdl.handle.net/2047/D20289338


University of Manitoba

10. Nemes, Jordan. Root parallelism in Invisalign® treatment.

Degree: Preventive Dental Science, 2015, University of Manitoba

 AIM: To assess root parallelism after Invisalign® treatment. MATERIALS AND METHODS: The sample consisted of 101 patients (mean age: 22.7 years, 29 males, 72 females)… (more)

Subjects/Keywords: Invisalign; Root; Parallelism; Orthodontic

APA (6th Edition):

Nemes, J. (2015). Root parallelism in Invisalign® treatment. (Masters Thesis). University of Manitoba. Retrieved from http://hdl.handle.net/1993/31259

Chicago Manual of Style (16th Edition):

Nemes, Jordan. “Root parallelism in Invisalign® treatment.” 2015. Masters Thesis, University of Manitoba. Accessed March 07, 2021. http://hdl.handle.net/1993/31259.

MLA Handbook (7th Edition):

Nemes, Jordan. “Root parallelism in Invisalign® treatment.” 2015. Web. 07 Mar 2021.

Vancouver:

Nemes J. Root parallelism in Invisalign® treatment. [Internet] [Masters thesis]. University of Manitoba; 2015. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1993/31259.

Council of Science Editors:

Nemes J. Root parallelism in Invisalign® treatment. [Masters Thesis]. University of Manitoba; 2015. Available from: http://hdl.handle.net/1993/31259

11. Powell, Daniel Christopher. Lightweight speculative support for aggressive auto-parallelisation tools.

Degree: PhD, 2015, University of Edinburgh

 With the recent move to multi-core architectures it has become important to create the means to exploit the performance made available to us by these… (more)

Subjects/Keywords: 005.2; parallelism; speculative execution

APA (6th Edition):

Powell, D. C. (2015). Lightweight speculative support for aggressive auto-parallelisation tools. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/10566

Chicago Manual of Style (16th Edition):

Powell, Daniel Christopher. “Lightweight speculative support for aggressive auto-parallelisation tools.” 2015. Doctoral Dissertation, University of Edinburgh. Accessed March 07, 2021. http://hdl.handle.net/1842/10566.

MLA Handbook (7th Edition):

Powell, Daniel Christopher. “Lightweight speculative support for aggressive auto-parallelisation tools.” 2015. Web. 07 Mar 2021.

Vancouver:

Powell DC. Lightweight speculative support for aggressive auto-parallelisation tools. [Internet] [Doctoral dissertation]. University of Edinburgh; 2015. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1842/10566.

Council of Science Editors:

Powell DC. Lightweight speculative support for aggressive auto-parallelisation tools. [Doctoral Dissertation]. University of Edinburgh; 2015. Available from: http://hdl.handle.net/1842/10566


Uppsala University

12. Karlsson, Johan. Efficient use of Multi-core Technology in Interactive Desktop Applications.

Degree: Information Technology, 2015, Uppsala University

  The emergence of multi-core processors has successfully ended the era where applications could enjoy free and regular performance improvements without source code modifications. This… (more)

Subjects/Keywords: Multi-core processors; parallelism

APA (6th Edition):

Karlsson, J. (2015). Efficient use of Multi-core Technology in Interactive Desktop Applications. (Thesis). Uppsala University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-246120

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Karlsson, Johan. “Efficient use of Multi-core Technology in Interactive Desktop Applications.” 2015. Thesis, Uppsala University. Accessed March 07, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-246120.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Karlsson, Johan. “Efficient use of Multi-core Technology in Interactive Desktop Applications.” 2015. Web. 07 Mar 2021.

Vancouver:

Karlsson J. Efficient use of Multi-core Technology in Interactive Desktop Applications. [Internet] [Thesis]. Uppsala University; 2015. [cited 2021 Mar 07]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-246120.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Karlsson J. Efficient use of Multi-core Technology in Interactive Desktop Applications. [Thesis]. Uppsala University; 2015. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-246120

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Delft University of Technology

13. Chi, Huang-Da. Parallelizing a Video Filter-chain for Multi- and Many-core Systems.

Degree: 2018, Delft University of Technology

 Developing parallel applications to make efficient use of current and emerging parallel architectures remains a big challenge in modern application development where performance is a… (more)

Subjects/Keywords: parallelism; filter; Video; scalability; Performance

APA (6th Edition):

Chi, H. (2018). Parallelizing a Video Filter-chain for Multi- and Many-core Systems. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:8f168240-026e-47ba-9cd8-4d3e657249aa

Chicago Manual of Style (16th Edition):

Chi, Huang-Da. “Parallelizing a Video Filter-chain for Multi- and Many-core Systems.” 2018. Masters Thesis, Delft University of Technology. Accessed March 07, 2021. http://resolver.tudelft.nl/uuid:8f168240-026e-47ba-9cd8-4d3e657249aa.

MLA Handbook (7th Edition):

Chi, Huang-Da. “Parallelizing a Video Filter-chain for Multi- and Many-core Systems.” 2018. Web. 07 Mar 2021.

Vancouver:

Chi H. Parallelizing a Video Filter-chain for Multi- and Many-core Systems. [Internet] [Masters thesis]. Delft University of Technology; 2018. [cited 2021 Mar 07]. Available from: http://resolver.tudelft.nl/uuid:8f168240-026e-47ba-9cd8-4d3e657249aa.

Council of Science Editors:

Chi H. Parallelizing a Video Filter-chain for Multi- and Many-core Systems. [Masters Thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:8f168240-026e-47ba-9cd8-4d3e657249aa


University of Arizona

14. Gaska, Benjamin James. ParForPy: Loop Parallelism in Python .

Degree: 2017, University of Arizona

 Scientists are trending towards usage of high-level programming languages such as Python. The convenience of these languages often has a performance cost. As the amount… (more)
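
For readers unfamiliar with the term in the title, loop parallelism simply means running independent loop iterations concurrently. The rough sketch below uses only the Python standard library, not ParForPy itself (whose interface is not described in this record); the `work` function is a made-up stand-in for an expensive loop body.

    # Rough sketch using only the standard library; ParForPy itself is not shown.
    # Independent iterations of a loop are distributed across worker processes.
    from concurrent.futures import ProcessPoolExecutor
    import math

    def work(i):
        # Stand-in for an expensive, independent loop body.
        return math.sqrt(i) * math.sin(i)

    def run_serial(n):
        return [work(i) for i in range(n)]

    def run_parallel(n, workers=4):
        with ProcessPoolExecutor(max_workers=workers) as ex:
            # ex.map preserves input order, so results match the serial loop.
            return list(ex.map(work, range(n), chunksize=max(1, n // workers)))

    if __name__ == "__main__":
        n = 10_000
        assert run_serial(n) == run_parallel(n)
        print("serial and parallel results match for", n, "iterations")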

Subjects/Keywords: Parallelism; Programming Languages; Python

APA (6th Edition):

Gaska, B. J. (2017). ParForPy: Loop Parallelism in Python . (Masters Thesis). University of Arizona. Retrieved from http://hdl.handle.net/10150/625320

Chicago Manual of Style (16th Edition):

Gaska, Benjamin James. “ParForPy: Loop Parallelism in Python .” 2017. Masters Thesis, University of Arizona. Accessed March 07, 2021. http://hdl.handle.net/10150/625320.

MLA Handbook (7th Edition):

Gaska, Benjamin James. “ParForPy: Loop Parallelism in Python .” 2017. Web. 07 Mar 2021.

Vancouver:

Gaska BJ. ParForPy: Loop Parallelism in Python . [Internet] [Masters thesis]. University of Arizona; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10150/625320.

Council of Science Editors:

Gaska BJ. ParForPy: Loop Parallelism in Python . [Masters Thesis]. University of Arizona; 2017. Available from: http://hdl.handle.net/10150/625320


University of Waterloo

15. Pérez Gavilán Torres, Camila María. Performance of the Ultra-Wide Word Model.

Degree: 2017, University of Waterloo

 The Ultra-wide word model of computation (UWRAM) is an extension of the Word-RAM model which has an ALU that can operate on w² bits at… (more)

Subjects/Keywords: uwram; algorithms; architecture; model; parallelism

APA (6th Edition):

Pérez Gavilán Torres, C. M. (2017). Performance of the Ultra-Wide Word Model. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/12349

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Pérez Gavilán Torres, Camila María. “Performance of the Ultra-Wide Word Model.” 2017. Thesis, University of Waterloo. Accessed March 07, 2021. http://hdl.handle.net/10012/12349.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Pérez Gavilán Torres, Camila María. “Performance of the Ultra-Wide Word Model.” 2017. Web. 07 Mar 2021.

Vancouver:

Pérez Gavilán Torres CM. Performance of the Ultra-Wide Word Model. [Internet] [Thesis]. University of Waterloo; 2017. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10012/12349.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Pérez Gavilán Torres CM. Performance of the Ultra-Wide Word Model. [Thesis]. University of Waterloo; 2017. Available from: http://hdl.handle.net/10012/12349

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Manchester

16. Simsek, Osman. Leveraging data-flow information for efficient scheduling of task-parallel programs on heterogeneous systems.

Degree: PhD, 2020, University of Manchester

 Writing efficient programs for heterogeneous platforms is challenging: programmers must deal with multiple programming models, partition work for CPUs and accelerators with different compute capabilities,… (more)

Subjects/Keywords: Task parallelism; Heterogeneous systems; Scheduling

APA (6th Edition):

Simsek, O. (2020). Leveraging data-flow information for efficient scheduling of task-parallel programs on heterogeneous systems. (Doctoral Dissertation). University of Manchester. Retrieved from https://www.research.manchester.ac.uk/portal/en/theses/leveraging-dataflow-information-for-efficient-scheduling-of-taskparallel-programs-on-heterogeneous-systems(cadeccf3-53f9-43e0-ae1b-77138d32e90e).html ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.804135

Chicago Manual of Style (16th Edition):

Simsek, Osman. “Leveraging data-flow information for efficient scheduling of task-parallel programs on heterogeneous systems.” 2020. Doctoral Dissertation, University of Manchester. Accessed March 07, 2021. https://www.research.manchester.ac.uk/portal/en/theses/leveraging-dataflow-information-for-efficient-scheduling-of-taskparallel-programs-on-heterogeneous-systems(cadeccf3-53f9-43e0-ae1b-77138d32e90e).html ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.804135.

MLA Handbook (7th Edition):

Simsek, Osman. “Leveraging data-flow information for efficient scheduling of task-parallel programs on heterogeneous systems.” 2020. Web. 07 Mar 2021.

Vancouver:

Simsek O. Leveraging data-flow information for efficient scheduling of task-parallel programs on heterogeneous systems. [Internet] [Doctoral dissertation]. University of Manchester; 2020. [cited 2021 Mar 07]. Available from: https://www.research.manchester.ac.uk/portal/en/theses/leveraging-dataflow-information-for-efficient-scheduling-of-taskparallel-programs-on-heterogeneous-systems(cadeccf3-53f9-43e0-ae1b-77138d32e90e).html ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.804135.

Council of Science Editors:

Simsek O. Leveraging data-flow information for efficient scheduling of task-parallel programs on heterogeneous systems. [Doctoral Dissertation]. University of Manchester; 2020. Available from: https://www.research.manchester.ac.uk/portal/en/theses/leveraging-dataflow-information-for-efficient-scheduling-of-taskparallel-programs-on-heterogeneous-systems(cadeccf3-53f9-43e0-ae1b-77138d32e90e).html ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.804135


Rutgers University

17. Yoga, Adarsh, 1985-. Parallelism-driven performance analysis techniques for task parallel programs.

Degree: PhD, Computer Science, 2019, Rutgers University

Performance analysis of parallel programs continues to be challenging for programmers. Programmers have to account for several factors to extract the best possible performance from… (more)

Subjects/Keywords: Parallelism; Parallel programs (Computer programs)

APA (6th Edition):

Yoga, Adarsh, 1. (2019). Parallelism-driven performance analysis techniques for task parallel programs. (Doctoral Dissertation). Rutgers University. Retrieved from https://rucore.libraries.rutgers.edu/rutgers-lib/62070/

Chicago Manual of Style (16th Edition):

Yoga, Adarsh, 1985-. “Parallelism-driven performance analysis techniques for task parallel programs.” 2019. Doctoral Dissertation, Rutgers University. Accessed March 07, 2021. https://rucore.libraries.rutgers.edu/rutgers-lib/62070/.

MLA Handbook (7th Edition):

Yoga, Adarsh, 1985-. “Parallelism-driven performance analysis techniques for task parallel programs.” 2019. Web. 07 Mar 2021.

Vancouver:

Yoga, Adarsh 1. Parallelism-driven performance analysis techniques for task parallel programs. [Internet] [Doctoral dissertation]. Rutgers University; 2019. [cited 2021 Mar 07]. Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/62070/.

Council of Science Editors:

Yoga, Adarsh 1. Parallelism-driven performance analysis techniques for task parallel programs. [Doctoral Dissertation]. Rutgers University; 2019. Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/62070/


Penn State University

18. Lublinerman, Roberto Elias. Concurrent Assemblies: A Model for Concurrent Program Execution.

Degree: 2012, Penn State University

 We present Concurrent Assemblies, an abstract model for modeling shared memory parallel programs. In particular Concurrent Assemblies is targeted to irregular parallel applications such as… (more)

Subjects/Keywords: Parallel programming; Programming abstractions; Irregular parallelism; Data-parallelism; Ownership

APA (6th Edition):

Lublinerman, R. E. (2012). Concurrent Assemblies: A Model for Concurrent Program Execution. (Thesis). Penn State University. Retrieved from https://submit-etda.libraries.psu.edu/catalog/15407

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Lublinerman, Roberto Elias. “Concurrent Assemblies: A Model for Concurrent Program Execution.” 2012. Thesis, Penn State University. Accessed March 07, 2021. https://submit-etda.libraries.psu.edu/catalog/15407.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Lublinerman, Roberto Elias. “Concurrent Assemblies: A Model for Concurrent Program Execution.” 2012. Web. 07 Mar 2021.

Vancouver:

Lublinerman RE. Concurrent Assemblies: A Model for Concurrent Program Execution. [Internet] [Thesis]. Penn State University; 2012. [cited 2021 Mar 07]. Available from: https://submit-etda.libraries.psu.edu/catalog/15407.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Lublinerman RE. Concurrent Assemblies: A Model for Concurrent Program Execution. [Thesis]. Penn State University; 2012. Available from: https://submit-etda.libraries.psu.edu/catalog/15407

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Queensland University of Technology

19. Craik, Andrew. A framework for reasoning about inherent parallelism in modern object-oriented languages.

Degree: 2011, Queensland University of Technology

 With the emergence of multi-core processors into the mainstream, parallel programming is no longer the specialized domain it once was. There is a growing need… (more)

Subjects/Keywords: programming languages; Ownership Types; parallelization; inherent parallelism; conditional parallelism; effect system

APA (6th Edition):

Craik, A. (2011). A framework for reasoning about inherent parallelism in modern object-oriented languages. (Thesis). Queensland University of Technology. Retrieved from https://eprints.qut.edu.au/40877/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Craik, Andrew. “A framework for reasoning about inherent parallelism in modern object-oriented languages.” 2011. Thesis, Queensland University of Technology. Accessed March 07, 2021. https://eprints.qut.edu.au/40877/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Craik, Andrew. “A framework for reasoning about inherent parallelism in modern object-oriented languages.” 2011. Web. 07 Mar 2021.

Vancouver:

Craik A. A framework for reasoning about inherent parallelism in modern object-oriented languages. [Internet] [Thesis]. Queensland University of Technology; 2011. [cited 2021 Mar 07]. Available from: https://eprints.qut.edu.au/40877/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Craik A. A framework for reasoning about inherent parallelism in modern object-oriented languages. [Thesis]. Queensland University of Technology; 2011. Available from: https://eprints.qut.edu.au/40877/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Universidade Nova

20. Delgado, Nuno Miguel de Brito. A system’s approach to cache hierarchy-aware decomposition of data-parallel computations.

Degree: 2014, Universidade Nova

Dissertation submitted to obtain the degree of Master in Informatics Engineering

 The architecture of today’s processors is very complex, comprising several computational cores and an intricate… (more)

Subjects/Keywords: Data-parallelism; Hierarchical parallelism; Domain decomposition; Runtime systems

APA (6th Edition):

Delgado, N. M. d. B. (2014). A system’s approach to cache hierarchy-aware decomposition of data-parallel computations. (Thesis). Universidade Nova. Retrieved from http://www.rcaap.pt/detail.jsp?id=oai:run.unl.pt:10362/13014

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Delgado, Nuno Miguel de Brito. “A system’s approach to cache hierarchy-aware decomposition of data-parallel computations.” 2014. Thesis, Universidade Nova. Accessed March 07, 2021. http://www.rcaap.pt/detail.jsp?id=oai:run.unl.pt:10362/13014.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Delgado, Nuno Miguel de Brito. “A system’s approach to cache hierarchy-aware decomposition of data-parallel computations.” 2014. Web. 07 Mar 2021.

Vancouver:

Delgado NMdB. A system’s approach to cache hierarchy-aware decomposition of data-parallel computations. [Internet] [Thesis]. Universidade Nova; 2014. [cited 2021 Mar 07]. Available from: http://www.rcaap.pt/detail.jsp?id=oai:run.unl.pt:10362/13014.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Delgado NMdB. A system’s approach to cache hierarchy-aware decomposition of data-parallel computations. [Thesis]. Universidade Nova; 2014. Available from: http://www.rcaap.pt/detail.jsp?id=oai:run.unl.pt:10362/13014

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Maryland

21. Zuzak, Michael. Exploiting Nested Parallelism on Heterogeneous Processors.

Degree: Electrical Engineering, 2016, University of Maryland

 Heterogeneous computing systems have become common in modern processor architectures. These systems, such as those released by AMD, Intel, and Nvidia, include both CPU and… (more)

Subjects/Keywords: Computer engineering; Heterogeneous Execution Models; Heterogeneous Processors; Multigrain Parallelism; Nested Parallelism

APA (6th Edition):

Zuzak, M. (2016). Exploiting Nested Parallelism on Heterogeneous Processors. (Thesis). University of Maryland. Retrieved from http://hdl.handle.net/1903/18305

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Zuzak, Michael. “Exploiting Nested Parallelism on Heterogeneous Processors.” 2016. Thesis, University of Maryland. Accessed March 07, 2021. http://hdl.handle.net/1903/18305.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Zuzak, Michael. “Exploiting Nested Parallelism on Heterogeneous Processors.” 2016. Web. 07 Mar 2021.

Vancouver:

Zuzak M. Exploiting Nested Parallelism on Heterogeneous Processors. [Internet] [Thesis]. University of Maryland; 2016. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1903/18305.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Zuzak M. Exploiting Nested Parallelism on Heterogeneous Processors. [Thesis]. University of Maryland; 2016. Available from: http://hdl.handle.net/1903/18305

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Rochester

22. Lu, Li. Designing for the simple case in a parallel scripting language.

Degree: PhD, 2015, University of Rochester

 The fast development of parallel computer systems poses a challenge to programming language design and implementation. On the one hand, simple semantics are desirable; on… (more)

Subjects/Keywords: Parallel programming; scripting language; deterministic parallelism

APA (6th Edition):

Lu, L. (2015). Designing for the simple case in a parallel scripting language. (Doctoral Dissertation). University of Rochester. Retrieved from http://hdl.handle.net/1802/29300

Chicago Manual of Style (16th Edition):

Lu, Li. “Designing for the simple case in a parallel scripting language.” 2015. Doctoral Dissertation, University of Rochester. Accessed March 07, 2021. http://hdl.handle.net/1802/29300.

MLA Handbook (7th Edition):

Lu, Li. “Designing for the simple case in a parallel scripting language.” 2015. Web. 07 Mar 2021.

Vancouver:

Lu L. Designing for the simple case in a parallel scripting language. [Internet] [Doctoral dissertation]. University of Rochester; 2015. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1802/29300.

Council of Science Editors:

Lu L. Designing for the simple case in a parallel scripting language. [Doctoral Dissertation]. University of Rochester; 2015. Available from: http://hdl.handle.net/1802/29300


University of Minnesota

23. Kolpe, Tejaswini. Power management in multicore processors through clustered DVFS.

Degree: MS, Electrical Engineering, 2010, University of Minnesota

University of Minnesota M.S. thesis. July 2010. Major: Electrical Engineering. Advisor: Sachin Suresh Sapatnekar. 1 computer file (PDF); viii, 57 pages. Ill. (some col.)

The… (more)

Subjects/Keywords: Cores; Processors; Clusters; Parallelism; Electrical Engineering

APA (6th Edition):

Kolpe, T. (2010). Power management in multicore processors through clustered DVFS. (Masters Thesis). University of Minnesota. Retrieved from http://purl.umn.edu/93628

Chicago Manual of Style (16th Edition):

Kolpe, Tejaswini. “Power management in multicore processors through clustered DVFS.” 2010. Masters Thesis, University of Minnesota. Accessed March 07, 2021. http://purl.umn.edu/93628.

MLA Handbook (7th Edition):

Kolpe, Tejaswini. “Power management in multicore processors through clustered DVFS.” 2010. Web. 07 Mar 2021.

Vancouver:

Kolpe T. Power management in multicore processors through clustered DVFS. [Internet] [Masters thesis]. University of Minnesota; 2010. [cited 2021 Mar 07]. Available from: http://purl.umn.edu/93628.

Council of Science Editors:

Kolpe T. Power management in multicore processors through clustered DVFS. [Masters Thesis]. University of Minnesota; 2010. Available from: http://purl.umn.edu/93628


University of Michigan

24. Beaumont, Jonathan. Rethinking Context Management of Data Parallel Processors in an Era of Irregular Computing.

Degree: PhD, Computer Science & Engineering, 2019, University of Michigan

 Data parallel architectures such as general purpose GPUs and those using SIMD extensions have become increasingly prevalent in high performance computing due to their power… (more)

Subjects/Keywords: GPU Architecture; Irregular Parallelism; Computer Science; Engineering

APA (6th Edition):

Beaumont, J. (2019). Rethinking Context Management of Data Parallel Processors in an Era of Irregular Computing. (Doctoral Dissertation). University of Michigan. Retrieved from http://hdl.handle.net/2027.42/153379

Chicago Manual of Style (16th Edition):

Beaumont, Jonathan. “Rethinking Context Management of Data Parallel Processors in an Era of Irregular Computing.” 2019. Doctoral Dissertation, University of Michigan. Accessed March 07, 2021. http://hdl.handle.net/2027.42/153379.

MLA Handbook (7th Edition):

Beaumont, Jonathan. “Rethinking Context Management of Data Parallel Processors in an Era of Irregular Computing.” 2019. Web. 07 Mar 2021.

Vancouver:

Beaumont J. Rethinking Context Management of Data Parallel Processors in an Era of Irregular Computing. [Internet] [Doctoral dissertation]. University of Michigan; 2019. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2027.42/153379.

Council of Science Editors:

Beaumont J. Rethinking Context Management of Data Parallel Processors in an Era of Irregular Computing. [Doctoral Dissertation]. University of Michigan; 2019. Available from: http://hdl.handle.net/2027.42/153379


University of Waterloo

25. Delisle, Thierry. Concurrency in C∀.

Degree: 2018, University of Waterloo

 C∀ is a modern, non-object-oriented extension of the C programming language. This thesis serves as a definition and an implementation for the concurrency and parallelism(more)

Subjects/Keywords: programming language; concurrency and parallelism; threading; C∀

APA (6th Edition):

Delisle, T. (2018). Concurrency in C∀. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/12888

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Delisle, Thierry. “Concurrency in C∀.” 2018. Thesis, University of Waterloo. Accessed March 07, 2021. http://hdl.handle.net/10012/12888.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Delisle, Thierry. “Concurrency in C∀.” 2018. Web. 07 Mar 2021.

Vancouver:

Delisle T. Concurrency in C∀. [Internet] [Thesis]. University of Waterloo; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/10012/12888.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Delisle T. Concurrency in C∀. [Thesis]. University of Waterloo; 2018. Available from: http://hdl.handle.net/10012/12888

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

26. Srivastava, Pallavi. Exploring model parallelism in distributed scheduling of neural network frameworks.

Degree: MS, Computer Science, 2018, University of Illinois – Urbana-Champaign

 The growth in size and computational requirements in training Neural Networks (NN) over the past few years has led to an increase in their sizes.… (more)

Subjects/Keywords: model parallelism

APA (6th Edition):

Srivastava, P. (2018). Exploring model parallelism in distributed scheduling of neural network frameworks. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/101625

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Srivastava, Pallavi. “Exploring model parallelism in distributed scheduling of neural network frameworks.” 2018. Thesis, University of Illinois – Urbana-Champaign. Accessed March 07, 2021. http://hdl.handle.net/2142/101625.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Srivastava, Pallavi. “Exploring model parallelism in distributed scheduling of neural network frameworks.” 2018. Web. 07 Mar 2021.

Vancouver:

Srivastava P. Exploring model parallelism in distributed scheduling of neural network frameworks. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2018. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/2142/101625.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Srivastava P. Exploring model parallelism in distributed scheduling of neural network frameworks. [Thesis]. University of Illinois – Urbana-Champaign; 2018. Available from: http://hdl.handle.net/2142/101625

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Colorado

27. Price, Graham David. Dynamic Trace Analysis with Zero-Suppressed BDDs.

Degree: PhD, Electrical, Computer & Energy Engineering, 2011, University of Colorado

  Instruction level parallelism (ILP) limitations have forced processor manufacturers to develop multi-core platforms with the expectation that programs will be able to exploit thread… (more)

Subjects/Keywords: Dynamic Program Analysis; Parallelism; Computer Sciences; Engineering

APA (6th Edition):

Price, G. D. (2011). Dynamic Trace Analysis with Zero-Suppressed BDDs. (Doctoral Dissertation). University of Colorado. Retrieved from https://scholar.colorado.edu/ecen_gradetds/27

Chicago Manual of Style (16th Edition):

Price, Graham David. “Dynamic Trace Analysis with Zero-Suppressed BDDs.” 2011. Doctoral Dissertation, University of Colorado. Accessed March 07, 2021. https://scholar.colorado.edu/ecen_gradetds/27.

MLA Handbook (7th Edition):

Price, Graham David. “Dynamic Trace Analysis with Zero-Suppressed BDDs.” 2011. Web. 07 Mar 2021.

Vancouver:

Price GD. Dynamic Trace Analysis with Zero-Suppressed BDDs. [Internet] [Doctoral dissertation]. University of Colorado; 2011. [cited 2021 Mar 07]. Available from: https://scholar.colorado.edu/ecen_gradetds/27.

Council of Science Editors:

Price GD. Dynamic Trace Analysis with Zero-Suppressed BDDs. [Doctoral Dissertation]. University of Colorado; 2011. Available from: https://scholar.colorado.edu/ecen_gradetds/27

28. Johnell, Carl. Parallel programming in Go and Scala : A performance comparison.

Degree: 2015, , Department of Software Engineering

      This thesis provides a performance comparison of parallel programming in Go and Scala. Go supports concurrency through goroutines and channels. Scala have… (more)

Subjects/Keywords: Go; Scala; parallelism; concurrency; Software Engineering; Programvaruteknik

APA (6th Edition):

Johnell, C. (2015). Parallel programming in Go and Scala : A performance comparison. (Thesis). , Department of Software Engineering. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:bth-996

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Johnell, Carl. “Parallel programming in Go and Scala : A performance comparison.” 2015. Thesis, , Department of Software Engineering. Accessed March 07, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-996.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Johnell, Carl. “Parallel programming in Go and Scala : A performance comparison.” 2015. Web. 07 Mar 2021.

Vancouver:

Johnell C. Parallel programming in Go and Scala : A performance comparison. [Internet] [Thesis]. , Department of Software Engineering; 2015. [cited 2021 Mar 07]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-996.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Johnell C. Parallel programming in Go and Scala : A performance comparison. [Thesis]. , Department of Software Engineering; 2015. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-996

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Rice University

29. Sbirlea, Dragos Dumitru. Memory and Communication Optimizations for Macro-dataflow Programs.

Degree: PhD, Engineering, 2015, Rice University

 It is now widely recognized that increased levels of parallelism are a necessary condition for improved application performance on multicore computers. However, the memory-per-core ratio… (more)

Subjects/Keywords: macro dataflow; inspector/executor; task parallelism

APA (6th Edition):

Sbirlea, D. D. (2015). Memory and Communication Optimizations for Macro-dataflow Programs. (Doctoral Dissertation). Rice University. Retrieved from http://hdl.handle.net/1911/88143

Chicago Manual of Style (16th Edition):

Sbirlea, Dragos Dumitru. “Memory and Communication Optimizations for Macro-dataflow Programs.” 2015. Doctoral Dissertation, Rice University. Accessed March 07, 2021. http://hdl.handle.net/1911/88143.

MLA Handbook (7th Edition):

Sbirlea, Dragos Dumitru. “Memory and Communication Optimizations for Macro-dataflow Programs.” 2015. Web. 07 Mar 2021.

Vancouver:

Sbirlea DD. Memory and Communication Optimizations for Macro-dataflow Programs. [Internet] [Doctoral dissertation]. Rice University; 2015. [cited 2021 Mar 07]. Available from: http://hdl.handle.net/1911/88143.

Council of Science Editors:

Sbirlea DD. Memory and Communication Optimizations for Macro-dataflow Programs. [Doctoral Dissertation]. Rice University; 2015. Available from: http://hdl.handle.net/1911/88143


Princeton University

30. Oh, Taewook. Automatic Exploitation of Input Parallelism .

Degree: PhD, 2015, Princeton University

Parallelism may reside in the input of a program rather than the program itself. A script interpreter, for example, is hard to parallelize because its… (more)

Subjects/Keywords: Automatic Parallelization; Input Parallelism; Program Specialization

APA (6th Edition):

Oh, T. (2015). Automatic Exploitation of Input Parallelism . (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp018336h4299

Chicago Manual of Style (16th Edition):

Oh, Taewook. “Automatic Exploitation of Input Parallelism .” 2015. Doctoral Dissertation, Princeton University. Accessed March 07, 2021. http://arks.princeton.edu/ark:/88435/dsp018336h4299.

MLA Handbook (7th Edition):

Oh, Taewook. “Automatic Exploitation of Input Parallelism .” 2015. Web. 07 Mar 2021.

Vancouver:

Oh T. Automatic Exploitation of Input Parallelism . [Internet] [Doctoral dissertation]. Princeton University; 2015. [cited 2021 Mar 07]. Available from: http://arks.princeton.edu/ark:/88435/dsp018336h4299.

Council of Science Editors:

Oh T. Automatic Exploitation of Input Parallelism . [Doctoral Dissertation]. Princeton University; 2015. Available from: http://arks.princeton.edu/ark:/88435/dsp018336h4299
