
You searched for +publisher:"University of Arizona" +contributor:("Strout, Michelle"). Showing records 1 – 3 of 3 total matches.



University of Arizona

1. Gaska, Benjamin James. ParForPy: Loop Parallelism in Python.

Degree: 2017, University of Arizona

Scientists are trending toward high-level programming languages such as Python. The convenience of these languages often comes at a performance cost, and as the amount of data being processed increases, this can make using them unfeasible. Parallelism is a means to achieve better performance, but many users are unaware of it or find it difficult to work with. This thesis presents ParForPy, a loop-parallelization construct that simplifies the use of parallelism in Python. Discussion is included of when parallelism matches a problem well. Results indicate that ParForPy is both capable of improving program execution time and perceived to be a simpler construct to understand than other techniques for parallelism in Python. Advisors/Committee Members: Strout, Michelle (advisor), Surdeanu, Mihai (advisor), Strout, Michelle (committeemember), Surdeanu, Mihai (committeemember), Debray, Saumya (committeemember).

Subjects/Keywords: Parallelism; Programming Languages; Python


APA (6th Edition):

Gaska, B. J. (2017). ParForPy: Loop Parallelism in Python. (Masters Thesis). University of Arizona. Retrieved from http://hdl.handle.net/10150/625320

Chicago Manual of Style (16th Edition):

Gaska, Benjamin James. “ParForPy: Loop Parallelism in Python.” 2017. Masters Thesis, University of Arizona. Accessed December 12, 2019. http://hdl.handle.net/10150/625320.

MLA Handbook (7th Edition):

Gaska, Benjamin James. “ParForPy: Loop Parallelism in Python.” 2017. Web. 12 Dec 2019.

Vancouver:

Gaska BJ. ParForPy: Loop Parallelism in Python. [Internet] [Masters thesis]. University of Arizona; 2017. [cited 2019 Dec 12]. Available from: http://hdl.handle.net/10150/625320.

Council of Science Editors:

Gaska BJ. ParForPy: Loop Parallelism in Python. [Masters Thesis]. University of Arizona; 2017. Available from: http://hdl.handle.net/10150/625320


University of Arizona

2. Stephens, Jon. Enabling Specialization for Dynamic Programming Languages.

Degree: 2018, University of Arizona

Scientists across many diverse fields, including medicine, astronomy, and biology, often program to aid in the analysis of large datasets. Many of them prototype in dynamic programming languages due to their perceived convenience. While this may shorten development time, the chosen language is often interpreted and therefore incurs a high runtime overhead, reducing scalability. Program specialization presents a promising method of decreasing this overhead without inconveniencing the user, but prior work cannot generically specialize interpreters. In this thesis, we take steps toward generic interpreter specialization by generically identifying specializable inputs. We do so by taking a checkpoint of the interpreter immediately before it begins executing the script. This captures much more state for specialization than prior work, which should improve specialization's effectiveness. In addition, we show that checkpoints are practical and speculate on how specialization can improve interpreter performance. Advisors/Committee Members: Debray, Saumya (advisor), Collberg, Christian (committeemember), Isaacs, Kate (committeemember), Strout, Michelle (committeemember), Debray, Saumya (committeemember), Proebsting, Todd (committeemember).
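The core idea behind program specialization — fixing a known, static input so that interpretive dispatch is resolved once ahead of time — can be illustrated with a toy expression interpreter. This is a conceptual sketch of specialization in general, not the thesis's checkpoint-based technique; the interpreter and its instruction format are invented for illustration.

```python
# A naive interpreter re-dispatches on the program structure every call.
def interpret(prog, env):
    op = prog[0]
    if op == "const":
        return prog[1]
    if op == "var":
        return env[prog[1]]
    if op == "add":
        return interpret(prog[1], env) + interpret(prog[2], env)
    if op == "mul":
        return interpret(prog[1], env) * interpret(prog[2], env)
    raise ValueError(op)

# A specializer treats the program as a static input: dispatch is
# resolved once, leaving a plain closure over the dynamic input (env).
def specialize(prog):
    op = prog[0]
    if op == "const":
        c = prog[1]
        return lambda env: c
    if op == "var":
        name = prog[1]
        return lambda env: env[name]
    if op == "add":
        f, g = specialize(prog[1]), specialize(prog[2])
        return lambda env: f(env) + g(env)
    if op == "mul":
        f, g = specialize(prog[1]), specialize(prog[2])
        return lambda env: f(env) * g(env)
    raise ValueError(op)

# x * x + 1, encoded as nested tuples.
prog = ("add", ("mul", ("var", "x"), ("var", "x")), ("const", 1))
compiled = specialize(prog)
assert interpret(prog, {"x": 3}) == compiled({"x": 3}) == 10
```

The specialized closure skips the per-call `if`-chain entirely, which is the same overhead-removal goal the thesis pursues for real interpreters.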

Subjects/Keywords: Checkpointing; Dynamic Programming Languages; Interpreters; Specialization


APA (6th Edition):

Stephens, J. (2018). Enabling Specialization for Dynamic Programming Languages. (Masters Thesis). University of Arizona. Retrieved from http://hdl.handle.net/10150/630143

Chicago Manual of Style (16th Edition):

Stephens, Jon. “Enabling Specialization for Dynamic Programming Languages.” 2018. Masters Thesis, University of Arizona. Accessed December 12, 2019. http://hdl.handle.net/10150/630143.

MLA Handbook (7th Edition):

Stephens, Jon. “Enabling Specialization for Dynamic Programming Languages.” 2018. Web. 12 Dec 2019.

Vancouver:

Stephens J. Enabling Specialization for Dynamic Programming Languages. [Internet] [Masters thesis]. University of Arizona; 2018. [cited 2019 Dec 12]. Available from: http://hdl.handle.net/10150/630143.

Council of Science Editors:

Stephens J. Enabling Specialization for Dynamic Programming Languages. [Masters Thesis]. University of Arizona; 2018. Available from: http://hdl.handle.net/10150/630143


University of Arizona

3. Savoie, Lee. Inter-Job Optimization in High Performance Computing.

Degree: 2019, University of Arizona

Future high performance computing (HPC) systems will face unique problems, including high power consumption and severe network contention. Both power and the network are shared resources; while individual jobs can optimize their use of these resources, we will realize greater benefits if we optimize them across all running jobs. Accordingly, this dissertation presents inter-job optimization strategies to limit power consumption and to mitigate network contention. One way to reduce HPC power consumption is to enforce a fixed power limit for running jobs. However, HPC applications do not consume constant power over their lifetimes. Thus, applications that are assigned a fixed power bound may be forced to slow down during high-power computation phases, but may not consume their full power allocation during low-power I/O phases. This dissertation explores algorithms that leverage application characteristics—phase frequency, duration, and power needs—to shift unused power from applications in I/O phases to applications in computation phases, thus improving system-wide performance. We design novel techniques that include explicit staggering of applications to improve power shifting. Compared to executing without power shifting, our algorithms can improve average performance by up to 8% or improve performance of a single, high-priority application by up to 32%. We also investigate the use of Quality of Service (QoS) mechanisms to reduce the negative impact of network contention. QoS allows users to manage resource sharing between network flows and to provide bandwidth guarantees to specific flows. Our results show that applying QoS at the job level significantly reduces the impact of contention on high-priority jobs, but it degrades the performance of other jobs and reduces overall throughput. However, applying QoS at the process level improves performance for specific jobs by up to 40%, and in some cases it completely eliminates the impact of contention. It achieves these improvements with limited negative impact on other jobs; any job that experiences performance loss typically degrades less than 5%, often much less. The inter-job optimizations presented in this dissertation improve power and network management on HPC systems. Current and future systems can employ these techniques to enhance their performance and efficiency. Advisors/Committee Members: Lowenthal, David K (advisor), Strout, Michelle (committeemember), Zhang, Beichuan (committeemember), de Supinski, Bronis R. (committeemember), Mohror, Kathryn (committeemember).
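The power-shifting idea — jobs in low-power I/O phases donating unused budget to jobs in high-power computation phases, under a fixed system-wide cap — can be sketched as a simple reallocation step. The job names, phase labels, and wattage figures below are illustrative assumptions, not the dissertation's actual algorithm or data.

```python
# Power-shifting sketch: start from a uniform static allocation, let
# I/O-phase jobs keep only what they demand, and redistribute the
# surplus evenly to compute-phase jobs.
def shift_power(jobs, system_cap):
    """jobs: dict of name -> {"phase": "compute" | "io", "demand": watts}.
    Returns name -> allocated watts; total never exceeds system_cap."""
    base = system_cap / len(jobs)  # uniform static allocation per job
    alloc = {}
    surplus = 0.0
    # I/O-phase jobs take min(base, demand); the rest goes to a pool.
    for name, job in jobs.items():
        if job["phase"] == "io":
            alloc[name] = min(base, job["demand"])
            surplus += base - alloc[name]
    # Compute-phase jobs split the pooled surplus on top of their base.
    compute = [n for n, j in jobs.items() if j["phase"] == "compute"]
    share = surplus / len(compute) if compute else 0.0
    for name in compute:
        alloc[name] = base + share
    return alloc

jobs = {
    "A": {"phase": "compute", "demand": 150.0},
    "B": {"phase": "io",      "demand": 40.0},
    "C": {"phase": "compute", "demand": 150.0},
}
alloc = shift_power(jobs, system_cap=300.0)
# B keeps 40 W of its 100 W base; A and C each gain 30 W of the surplus.
assert sum(alloc.values()) <= 300.0
```

A real scheduler would also use the phase frequency and duration characteristics the abstract mentions, to predict phase transitions rather than react to them.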

Subjects/Keywords: high-performance computing; network; performance; power; quality-of-service


APA (6th Edition):

Savoie, L. (2019). Inter-Job Optimization in High Performance Computing. (Doctoral Dissertation). University of Arizona. Retrieved from http://hdl.handle.net/10150/634303

Chicago Manual of Style (16th Edition):

Savoie, Lee. “Inter-Job Optimization in High Performance Computing.” 2019. Doctoral Dissertation, University of Arizona. Accessed December 12, 2019. http://hdl.handle.net/10150/634303.

MLA Handbook (7th Edition):

Savoie, Lee. “Inter-Job Optimization in High Performance Computing.” 2019. Web. 12 Dec 2019.

Vancouver:

Savoie L. Inter-Job Optimization in High Performance Computing. [Internet] [Doctoral dissertation]. University of Arizona; 2019. [cited 2019 Dec 12]. Available from: http://hdl.handle.net/10150/634303.

Council of Science Editors:

Savoie L. Inter-Job Optimization in High Performance Computing. [Doctoral Dissertation]. University of Arizona; 2019. Available from: http://hdl.handle.net/10150/634303
