Dept: Electrical and Computer Engineering

You searched for subject:(parallel programming). Showing records 1 – 14 of 14 total matches.

No search limiters apply to these results.


Oregon State University

1. Yadav, Anil Kumar. Performance monitoring of parallel applications at large grain level.

Degree: MS, Electrical and Computer Engineering, 1989, Oregon State University

This thesis is an attempt to create a methodology to analyze the performance of parallel applications on a wide variety of platforms and programming environments.…

Subjects/Keywords: Parallel programming (Computer science)

APA (6th Edition):

Yadav, A. K. (1989). Performance monitoring of parallel applications at large grain level. (Masters Thesis). Oregon State University. Retrieved from http://hdl.handle.net/1957/38780

Chicago Manual of Style (16th Edition):

Yadav, Anil Kumar. “Performance monitoring of parallel applications at large grain level.” 1989. Masters Thesis, Oregon State University. Accessed November 13, 2019. http://hdl.handle.net/1957/38780.

MLA Handbook (7th Edition):

Yadav, Anil Kumar. “Performance monitoring of parallel applications at large grain level.” 1989. Web. 13 Nov 2019.

Vancouver:

Yadav AK. Performance monitoring of parallel applications at large grain level. [Internet] [Masters thesis]. Oregon State University; 1989. [cited 2019 Nov 13]. Available from: http://hdl.handle.net/1957/38780.

Council of Science Editors:

Yadav AK. Performance monitoring of parallel applications at large grain level. [Masters Thesis]. Oregon State University; 1989. Available from: http://hdl.handle.net/1957/38780


University of Florida

2. Su, Hung. Parallel Performance Wizard - Framework and Techniques for Parallel Application Optimization.

Degree: PhD, Electrical and Computer Engineering, 2010, University of Florida

Developing a high-performance parallel application is difficult. Given the complexity of high-performance parallel programs, developers often must rely on performance analysis tools to help them…

Subjects/Keywords: Compilers; Computer programming; Data collection; Data processing; Distance functions; Instrumentation; Modeling; Parallel programming; Programming models; Scalability; analysis, automatic, bottleneck, framework, mpi, optimization, parallel, performance, pgas, ppw, shmem, tool, upc

APA (6th Edition):

Su, H. (2010). Parallel Performance Wizard - Framework and Techniques for Parallel Application Optimization. (Doctoral Dissertation). University of Florida. Retrieved from http://ufdc.ufl.edu/UFE0042186

Chicago Manual of Style (16th Edition):

Su, Hung. “Parallel Performance Wizard - Framework and Techniques for Parallel Application Optimization.” 2010. Doctoral Dissertation, University of Florida. Accessed November 13, 2019. http://ufdc.ufl.edu/UFE0042186.

MLA Handbook (7th Edition):

Su, Hung. “Parallel Performance Wizard - Framework and Techniques for Parallel Application Optimization.” 2010. Web. 13 Nov 2019.

Vancouver:

Su H. Parallel Performance Wizard - Framework and Techniques for Parallel Application Optimization. [Internet] [Doctoral dissertation]. University of Florida; 2010. [cited 2019 Nov 13]. Available from: http://ufdc.ufl.edu/UFE0042186.

Council of Science Editors:

Su H. Parallel Performance Wizard - Framework and Techniques for Parallel Application Optimization. [Doctoral Dissertation]. University of Florida; 2010. Available from: http://ufdc.ufl.edu/UFE0042186


Mississippi State University

3. Wang, Chunheng. Parallel computing applications in large-scale power system operations.

Degree: PhD, Electrical and Computer Engineering, 2016, Mississippi State University

Electrical energy is the basic necessity for the economic development of human societies. In recent decades, the electricity industry has been undergoing enormous changes, which…

Subjects/Keywords: unit commitment; stochastic programming; power transmission switching; power system optimization; parallel algorithm

APA (6th Edition):

Wang, C. (2016). Parallel computing applications in large-scale power system operations. (Doctoral Dissertation). Mississippi State University. Retrieved from http://sun.library.msstate.edu/ETD-db/theses/available/etd-06262016-163413/

Chicago Manual of Style (16th Edition):

Wang, Chunheng. “Parallel computing applications in large-scale power system operations.” 2016. Doctoral Dissertation, Mississippi State University. Accessed November 13, 2019. http://sun.library.msstate.edu/ETD-db/theses/available/etd-06262016-163413/.

MLA Handbook (7th Edition):

Wang, Chunheng. “Parallel computing applications in large-scale power system operations.” 2016. Web. 13 Nov 2019.

Vancouver:

Wang C. Parallel computing applications in large-scale power system operations. [Internet] [Doctoral dissertation]. Mississippi State University; 2016. [cited 2019 Nov 13]. Available from: http://sun.library.msstate.edu/ETD-db/theses/available/etd-06262016-163413/.

Council of Science Editors:

Wang C. Parallel computing applications in large-scale power system operations. [Doctoral Dissertation]. Mississippi State University; 2016. Available from: http://sun.library.msstate.edu/ETD-db/theses/available/etd-06262016-163413/


Purdue University

4. Liu, Chenyang. Improving programmability and performance for scientific applications.

Degree: PhD, Electrical and Computer Engineering, 2016, Purdue University

With modern advancements in hardware and software technology scaling towards new limits, our compute machines are reaching new potentials to tackle more challenging problems.…

Subjects/Keywords: Applied sciences; Algorithm design; Concurrent collections; Parallel programming; Task parallelism; Tiling; Computer Engineering

APA (6th Edition):

Liu, C. (2016). Improving programmability and performance for scientific applications. (Doctoral Dissertation). Purdue University. Retrieved from https://docs.lib.purdue.edu/open_access_dissertations/967

Chicago Manual of Style (16th Edition):

Liu, Chenyang. “Improving programmability and performance for scientific applications.” 2016. Doctoral Dissertation, Purdue University. Accessed November 13, 2019. https://docs.lib.purdue.edu/open_access_dissertations/967.

MLA Handbook (7th Edition):

Liu, Chenyang. “Improving programmability and performance for scientific applications.” 2016. Web. 13 Nov 2019.

Vancouver:

Liu C. Improving programmability and performance for scientific applications. [Internet] [Doctoral dissertation]. Purdue University; 2016. [cited 2019 Nov 13]. Available from: https://docs.lib.purdue.edu/open_access_dissertations/967.

Council of Science Editors:

Liu C. Improving programmability and performance for scientific applications. [Doctoral Dissertation]. Purdue University; 2016. Available from: https://docs.lib.purdue.edu/open_access_dissertations/967


University of Florida

5. Aggarwal, Vikas. SHMEM+ and SCF System-Level Programming Models for Scalable Reconfigurable Computing.

Degree: PhD, Electrical and Computer Engineering, 2011, University of Florida

Heterogeneous computing systems comprised of FPGAs coupled with standard microprocessors are becoming an increasingly popular solution for building future high-performance computing (HPC) and high-performance embedded…

Subjects/Keywords: Application programming interfaces; Architectural models; Bandwidth; Communication models; Communication systems; Data transmission; Libraries; Multilevel models; Productivity; Programming models; FPGA  – HETEROGENEOUS  – MESSAGE  – MODEL  – PARALLEL  – PASSING  – PGAS  – PORTABILITY  – PRODUCTIVITY  – PROGRAMMING  – RECONFIGURABLE

APA (6th Edition):

Aggarwal, V. (2011). SHMEM+ and SCF System-Level Programming Models for Scalable Reconfigurable Computing. (Doctoral Dissertation). University of Florida. Retrieved from http://ufdc.ufl.edu/UFE0042821

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

Aggarwal, Vikas. “SHMEM+ and SCF System-Level Programming Models for Scalable Reconfigurable Computing.” 2011. Doctoral Dissertation, University of Florida. Accessed November 13, 2019. http://ufdc.ufl.edu/UFE0042821.

MLA Handbook (7th Edition):

Aggarwal, Vikas. “SHMEM+ and SCF System-Level Programming Models for Scalable Reconfigurable Computing.” 2011. Web. 13 Nov 2019.

Vancouver:

Aggarwal V. SHMEM+ and SCF System-Level Programming Models for Scalable Reconfigurable Computing. [Internet] [Doctoral dissertation]. University of Florida; 2011. [cited 2019 Nov 13]. Available from: http://ufdc.ufl.edu/UFE0042821.

Council of Science Editors:

Aggarwal V. SHMEM+ and SCF System-Level Programming Models for Scalable Reconfigurable Computing. [Doctoral Dissertation]. University of Florida; 2011. Available from: http://ufdc.ufl.edu/UFE0042821


Virginia Tech

6. Lee, Joo Hong. Hybrid Parallel Computing Strategies for Scientific Computing Applications.

Degree: PhD, Electrical and Computer Engineering, 2012, Virginia Tech

Multi-core, multi-processor, and Graphics Processing Unit (GPU) computer architectures pose significant challenges with respect to the efficient exploitation of parallelism for large-scale, scientific computing simulations.…

Subjects/Keywords: Pthreads; Parallel Programming; GPU Acceleration; Scientific Computing; Biological Systems Simulation; Hybrid Algorithms; Parallel Monte Carlo Algorithms; OpenMP; Hybrid Computing; Radiative Heat Transfer; Multiprocessor; Multi-threaded Software Performance

APA (6th Edition):

Lee, J. H. (2012). Hybrid Parallel Computing Strategies for Scientific Computing Applications. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/28882

Chicago Manual of Style (16th Edition):

Lee, Joo Hong. “Hybrid Parallel Computing Strategies for Scientific Computing Applications.” 2012. Doctoral Dissertation, Virginia Tech. Accessed November 13, 2019. http://hdl.handle.net/10919/28882.

MLA Handbook (7th Edition):

Lee, Joo Hong. “Hybrid Parallel Computing Strategies for Scientific Computing Applications.” 2012. Web. 13 Nov 2019.

Vancouver:

Lee JH. Hybrid Parallel Computing Strategies for Scientific Computing Applications. [Internet] [Doctoral dissertation]. Virginia Tech; 2012. [cited 2019 Nov 13]. Available from: http://hdl.handle.net/10919/28882.

Council of Science Editors:

Lee JH. Hybrid Parallel Computing Strategies for Scientific Computing Applications. [Doctoral Dissertation]. Virginia Tech; 2012. Available from: http://hdl.handle.net/10919/28882


Portland State University

7. Jothi, Komal. Dynamic Task Prediction for an SpMT Architecture Based on Control Independence.

Degree: MS, Electrical and Computer Engineering, 2009, Portland State University

Exploiting better performance from computer programs translates to finding more instructions to execute in parallel. Since most general purpose programs are written in an…

Subjects/Keywords: Computer architecture; Parallel programming (Computer science); Microprocessors  – Programming; Simultaneous multithreading processors; Microprocessors  – Design and construction; Threads (Computer programs); Computer and Systems Architecture; Electrical and Computer Engineering

APA (6th Edition):

Jothi, K. (2009). Dynamic Task Prediction for an SpMT Architecture Based on Control Independence. (Masters Thesis). Portland State University. Retrieved from https://pdxscholar.library.pdx.edu/open_access_etds/1707

Chicago Manual of Style (16th Edition):

Jothi, Komal. “Dynamic Task Prediction for an SpMT Architecture Based on Control Independence.” 2009. Masters Thesis, Portland State University. Accessed November 13, 2019. https://pdxscholar.library.pdx.edu/open_access_etds/1707.

MLA Handbook (7th Edition):

Jothi, Komal. “Dynamic Task Prediction for an SpMT Architecture Based on Control Independence.” 2009. Web. 13 Nov 2019.

Vancouver:

Jothi K. Dynamic Task Prediction for an SpMT Architecture Based on Control Independence. [Internet] [Masters thesis]. Portland State University; 2009. [cited 2019 Nov 13]. Available from: https://pdxscholar.library.pdx.edu/open_access_etds/1707.

Council of Science Editors:

Jothi K. Dynamic Task Prediction for an SpMT Architecture Based on Control Independence. [Masters Thesis]. Portland State University; 2009. Available from: https://pdxscholar.library.pdx.edu/open_access_etds/1707


Georgia Tech

8. Naik, Aniket Dilip. Efficient Conditional Synchronization for Transactional Memory Based System.

Degree: MS, Electrical and Computer Engineering, 2006, Georgia Tech

Multi-threaded applications are needed to realize the full potential of new chip-multi-threaded machines. Such applications are very difficult to program and orchestrate correctly, and transactional…

Subjects/Keywords: Parallel programming; Transactional memory; Threads (Computer programs); Transaction systems (Computer systems); Parallel programming (Computer science); Synchronization

APA (6th Edition):

Naik, A. D. (2006). Efficient Conditional Synchronization for Transactional Memory Based System. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/10517

Chicago Manual of Style (16th Edition):

Naik, Aniket Dilip. “Efficient Conditional Synchronization for Transactional Memory Based System.” 2006. Masters Thesis, Georgia Tech. Accessed November 13, 2019. http://hdl.handle.net/1853/10517.

MLA Handbook (7th Edition):

Naik, Aniket Dilip. “Efficient Conditional Synchronization for Transactional Memory Based System.” 2006. Web. 13 Nov 2019.

Vancouver:

Naik AD. Efficient Conditional Synchronization for Transactional Memory Based System. [Internet] [Masters thesis]. Georgia Tech; 2006. [cited 2019 Nov 13]. Available from: http://hdl.handle.net/1853/10517.

Council of Science Editors:

Naik AD. Efficient Conditional Synchronization for Transactional Memory Based System. [Masters Thesis]. Georgia Tech; 2006. Available from: http://hdl.handle.net/1853/10517


Virginia Tech

9. Hickman, Joseph. An Analysis of an Interrupt-Driven Implementation of the Master-Worker Model with Application-Specific Coprocessors.

Degree: MS, Electrical and Computer Engineering, 2007, Virginia Tech

In this thesis, we present a versatile parallel programming model composed of an individual general-purpose processor aided by several application-specific coprocessors. These computing units operate…

Subjects/Keywords: master-worker model; application-specific coprocessor; parallel programming; heterogeneous architecture; FPGA

APA (6th Edition):

Hickman, J. (2007). An Analysis of an Interrupt-Driven Implementation of the Master-Worker Model with Application-Specific Coprocessors. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/35826

Chicago Manual of Style (16th Edition):

Hickman, Joseph. “An Analysis of an Interrupt-Driven Implementation of the Master-Worker Model with Application-Specific Coprocessors.” 2007. Masters Thesis, Virginia Tech. Accessed November 13, 2019. http://hdl.handle.net/10919/35826.

MLA Handbook (7th Edition):

Hickman, Joseph. “An Analysis of an Interrupt-Driven Implementation of the Master-Worker Model with Application-Specific Coprocessors.” 2007. Web. 13 Nov 2019.

Vancouver:

Hickman J. An Analysis of an Interrupt-Driven Implementation of the Master-Worker Model with Application-Specific Coprocessors. [Internet] [Masters thesis]. Virginia Tech; 2007. [cited 2019 Nov 13]. Available from: http://hdl.handle.net/10919/35826.

Council of Science Editors:

Hickman J. An Analysis of an Interrupt-Driven Implementation of the Master-Worker Model with Application-Specific Coprocessors. [Masters Thesis]. Virginia Tech; 2007. Available from: http://hdl.handle.net/10919/35826


New Jersey Institute of Technology

10. Jin, Dejiang. A versatile programming model for dynamic task scheduling on cluster computers.

Degree: PhD, Electrical and Computer Engineering, 2005, New Jersey Institute of Technology

This dissertation studies the development of application programs for parallel and distributed computer systems, especially PC clusters. A methodology is proposed to increase the…

Subjects/Keywords: Programming model; Cluster; parallel; Distributed; Scalability; Load balancing; Computer Engineering

APA (6th Edition):

Jin, D. (2005). A versatile programming model for dynamic task scheduling on cluster computers. (Doctoral Dissertation). New Jersey Institute of Technology. Retrieved from https://digitalcommons.njit.edu/dissertations/699

Chicago Manual of Style (16th Edition):

Jin, Dejiang. “A versatile programming model for dynamic task scheduling on cluster computers.” 2005. Doctoral Dissertation, New Jersey Institute of Technology. Accessed November 13, 2019. https://digitalcommons.njit.edu/dissertations/699.

MLA Handbook (7th Edition):

Jin, Dejiang. “A versatile programming model for dynamic task scheduling on cluster computers.” 2005. Web. 13 Nov 2019.

Vancouver:

Jin D. A versatile programming model for dynamic task scheduling on cluster computers. [Internet] [Doctoral dissertation]. New Jersey Institute of Technology; 2005. [cited 2019 Nov 13]. Available from: https://digitalcommons.njit.edu/dissertations/699.

Council of Science Editors:

Jin D. A versatile programming model for dynamic task scheduling on cluster computers. [Doctoral Dissertation]. New Jersey Institute of Technology; 2005. Available from: https://digitalcommons.njit.edu/dissertations/699


Georgia Tech

11. Yoo, Richard M. Adaptive transaction scheduling for transactional memory systems.

Degree: MS, Electrical and Computer Engineering, 2008, Georgia Tech

Transactional memory systems are expected to enable parallel programming at lower programming complexity, while delivering improved performance over traditional lock-based systems. Nonetheless, there are certain…

Subjects/Keywords: Parallelism; Performance; Transaction effectiveness; Contention intensity; Transaction systems (Computer systems); Threads (Computer programs); Parallel programming (Computer science); Synchronization

APA (6th Edition):

Yoo, R. M. (2008). Adaptive transaction scheduling for transactional memory systems. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/22587

Chicago Manual of Style (16th Edition):

Yoo, Richard M. “Adaptive transaction scheduling for transactional memory systems.” 2008. Masters Thesis, Georgia Tech. Accessed November 13, 2019. http://hdl.handle.net/1853/22587.

MLA Handbook (7th Edition):

Yoo, Richard M. “Adaptive transaction scheduling for transactional memory systems.” 2008. Web. 13 Nov 2019.

Vancouver:

Yoo RM. Adaptive transaction scheduling for transactional memory systems. [Internet] [Masters thesis]. Georgia Tech; 2008. [cited 2019 Nov 13]. Available from: http://hdl.handle.net/1853/22587.

Council of Science Editors:

Yoo RM. Adaptive transaction scheduling for transactional memory systems. [Masters Thesis]. Georgia Tech; 2008. Available from: http://hdl.handle.net/1853/22587

Georgia Tech

12. He, Zhengyu. On algorithm design and programming model for multi-threaded computing.

Degree: PhD, Electrical and Computer Engineering, 2012, Georgia Tech

The objective of this work is to investigate the algorithm design and the programming model of multi-threaded computing. Designing multi-threaded algorithms is very challenging -…

Subjects/Keywords: Programming model; Parallel computing; Transactional memory; Maximum flow; Computer science; Computer algorithms; Algorithms; Multiprocessors

APA (6th Edition):

He, Z. (2012). On algorithm design and programming model for multi-threaded computing. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/43635

Chicago Manual of Style (16th Edition):

He, Zhengyu. “On algorithm design and programming model for multi-threaded computing.” 2012. Doctoral Dissertation, Georgia Tech. Accessed November 13, 2019. http://hdl.handle.net/1853/43635.

MLA Handbook (7th Edition):

He, Zhengyu. “On algorithm design and programming model for multi-threaded computing.” 2012. Web. 13 Nov 2019.

Vancouver:

He Z. On algorithm design and programming model for multi-threaded computing. [Internet] [Doctoral dissertation]. Georgia Tech; 2012. [cited 2019 Nov 13]. Available from: http://hdl.handle.net/1853/43635.

Council of Science Editors:

He Z. On algorithm design and programming model for multi-threaded computing. [Doctoral Dissertation]. Georgia Tech; 2012. Available from: http://hdl.handle.net/1853/43635


Portland State University

13. Patwardhan, Chintamani M. Ignoring Interprocessor Communication During Scheduling.

Degree: MS, Electrical and Computer Engineering, 1992, Portland State University

The goal of parallel processing is to achieve high speed computing by partitioning a program into concurrent parts, assigning them in an efficient way…

Subjects/Keywords: Parallel programming (Computer science)  – Mathematical models; Computer algorithms; Computer Engineering; Electrical and Computer Engineering

APA (6th Edition):

Patwardhan, C. M. (1992). Ignoring Interprocessor Communication During Scheduling. (Masters Thesis). Portland State University. Retrieved from https://pdxscholar.library.pdx.edu/open_access_etds/4422

Chicago Manual of Style (16th Edition):

Patwardhan, Chintamani M. “Ignoring Interprocessor Communication During Scheduling.” 1992. Masters Thesis, Portland State University. Accessed November 13, 2019. https://pdxscholar.library.pdx.edu/open_access_etds/4422.

MLA Handbook (7th Edition):

Patwardhan, Chintamani M. “Ignoring Interprocessor Communication During Scheduling.” 1992. Web. 13 Nov 2019.

Vancouver:

Patwardhan CM. Ignoring Interprocessor Communication During Scheduling. [Internet] [Masters thesis]. Portland State University; 1992. [cited 2019 Nov 13]. Available from: https://pdxscholar.library.pdx.edu/open_access_etds/4422.

Council of Science Editors:

Patwardhan CM. Ignoring Interprocessor Communication During Scheduling. [Masters Thesis]. Portland State University; 1992. Available from: https://pdxscholar.library.pdx.edu/open_access_etds/4422


Colorado State University

14. Briceno Guerrero, Luis Diego. Resource allocation for heterogeneous computing systems : performance criteria, robustness measures, optimization heuristics, and properties.

Degree: PhD, Electrical and Computer Engineering, 2007, Colorado State University

Heterogeneous computing (HC) is the coordinated use of different types of machines, networks, and interfaces to maximize the combined performance and/or cost effectiveness of the…

Subjects/Keywords: heterogeneous computing; robustness; resource allocation; heuristics; Heterogeneous computing; Parallel processing (Electronic computers); Heuristic programming; Computing algorithms

APA (6th Edition):

Briceno Guerrero, L. D. (2007). Resource allocation for heterogeneous computing systems : performance criteria, robustness measures, optimization heuristics, and properties. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/40277

Chicago Manual of Style (16th Edition):

Briceno Guerrero, Luis Diego. “Resource allocation for heterogeneous computing systems : performance criteria, robustness measures, optimization heuristics, and properties.” 2007. Doctoral Dissertation, Colorado State University. Accessed November 13, 2019. http://hdl.handle.net/10217/40277.

MLA Handbook (7th Edition):

Briceno Guerrero, Luis Diego. “Resource allocation for heterogeneous computing systems : performance criteria, robustness measures, optimization heuristics, and properties.” 2007. Web. 13 Nov 2019.

Vancouver:

Briceno Guerrero LD. Resource allocation for heterogeneous computing systems : performance criteria, robustness measures, optimization heuristics, and properties. [Internet] [Doctoral dissertation]. Colorado State University; 2007. [cited 2019 Nov 13]. Available from: http://hdl.handle.net/10217/40277.

Council of Science Editors:

Briceno Guerrero LD. Resource allocation for heterogeneous computing systems : performance criteria, robustness measures, optimization heuristics, and properties. [Doctoral Dissertation]. Colorado State University; 2007. Available from: http://hdl.handle.net/10217/40277
