You searched for subject:(Cache miss rates). One record found.

University of Texas – Austin

1. Burden, Cassidy Aaron. Evaluating headroom for smart caching policies on GPUs.

Degree: MS in Engineering, Electrical and Computer Engineering, 2018, University of Texas – Austin

This report evaluates two distinct methods of improving the performance of GPU memory systems. Over the past semester, our research has focused on applying a state-of-the-art CPU cache replacement policy to GPUs and exploring the headroom of preemptively writing back dirty cache lines. Our first goal is to reduce L1 and L2 cache miss rates on GPUs by implementing the Hawkeye cache replacement policy. Hawkeye computes the optimal replacement decisions for past cache accesses in order to train its predictor for future caching decisions. While some benchmarks show performance improvements with Hawkeye, a significant number of our benchmarks are not sensitive to cache performance. From our experiments, we show that Hawkeye gives an average IPC improvement of 3.57% and 0.56% over Least Recently Used (LRU) when applied to the L1 and L2 caches, respectively. We also introduce the idea of precleaning, an alternative to write-back or write-through caching that aims to spread out write bandwidth. Committing L2 writes to main memory when memory congestion is low can hide or lower the performance impact of those writes. The idea of precleaning shows promise, but evaluating it fully requires more research into GPU access patterns and prediction techniques.

Advisors/Committee Members: Lin, Yun Calvin (advisor).
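
The Hawkeye policy mentioned in the abstract reconstructs Belady's optimal (OPT) decisions for past accesses and uses them to train a PC-indexed predictor that steers future insertion and eviction choices. The single-set sketch below illustrates that idea only; it is not the report's implementation, and the 8-way set, 3-bit counters, and LRU fallback (used here instead of Hawkeye's RRIP-style aging) are simplifying assumptions made for the example.

```cpp
// hawkeye_sketch.cpp -- illustrative single-set sketch of the Hawkeye idea,
// not the report's code. Assumed parameters: 8-way set, 3-bit saturating
// counters, LRU fallback instead of Hawkeye's RRIP-style aging.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <utility>
#include <vector>

constexpr int kWays = 8;        // associativity of the modeled set
constexpr int kCounterMax = 7;  // 3-bit saturating counter

struct Line {
  uint64_t tag = 0;
  uint64_t pc = 0;          // PC of the load that inserted the line
  uint64_t last_touch = 0;  // for the LRU fallback
  bool valid = false;
};

// OPTgen-style occupancy vector: a reuse at time `now` of a line last touched
// at time t0 would hit under Belady's OPT iff occupancy stayed below kWays
// over [t0, now); if so, the interval is marked as cached.
struct OptGen {
  std::vector<int> occupancy;
  std::unordered_map<uint64_t, uint64_t> last_use;  // tag -> last access time

  bool access(uint64_t tag, uint64_t now) {
    occupancy.push_back(0);
    bool opt_hit = false;
    auto it = last_use.find(tag);
    if (it != last_use.end()) {
      opt_hit = true;
      for (uint64_t t = it->second; t < now; ++t)
        if (occupancy[t] >= kWays) { opt_hit = false; break; }
      if (opt_hit)
        for (uint64_t t = it->second; t < now; ++t) ++occupancy[t];
    }
    last_use[tag] = now;
    return opt_hit;  // label used to train the predictor
  }
};

// PC-indexed predictor: learns whether loads from a PC tend to be
// cache-friendly (OPT hit) or cache-averse (OPT miss).
struct Predictor {
  std::unordered_map<uint64_t, int> table;
  void train(uint64_t pc, bool opt_hit) {
    int& c = table[pc];
    c = opt_hit ? std::min(c + 1, kCounterMax) : std::max(c - 1, 0);
  }
  bool friendly(uint64_t pc) const {
    auto it = table.find(pc);
    return it != table.end() && it->second > kCounterMax / 2;
  }
};

int main() {
  OptGen optgen;
  Predictor pred;
  Line set[kWays];
  uint64_t now = 0, hits = 0, misses = 0;

  // Toy trace: PC 0x20 reuses four hot lines, PC 0x10 streams through memory.
  std::vector<std::pair<uint64_t, uint64_t>> trace;  // (pc, tag)
  for (uint64_t i = 0; i < 200; ++i) {
    trace.push_back({0x20, i % 4});
    trace.push_back({0x10, 100 + i});
  }

  for (auto [pc, tag] : trace) {
    pred.train(pc, optgen.access(tag, now));  // label with OPT, train predictor

    bool hit = false;
    int victim = -1, lru = 0;
    for (int w = 0; w < kWays; ++w) {
      if (set[w].valid && set[w].tag == tag) { hit = true; set[w].last_touch = now; break; }
      if (!set[w].valid) victim = w;  // remember a free way
      if (set[w].last_touch < set[lru].last_touch) lru = w;
    }
    if (hit) { ++hits; ++now; continue; }
    ++misses;
    if (victim < 0) {
      victim = lru;  // default: LRU among resident lines
      for (int w = 0; w < kWays; ++w)
        if (!pred.friendly(set[w].pc)) { victim = w; break; }  // evict cache-averse first
    }
    set[victim] = {tag, pc, now, true};
    ++now;
  }
  std::cout << "hits=" << hits << " misses=" << misses << "\n";
}
```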
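
The report's second idea, precleaning, amounts to issuing writebacks of dirty L2 lines early, while memory congestion is low, so that later evictions during congested phases carry no extra write cost. Below is a minimal sketch of such a mechanism under stated assumptions; the write-queue congestion threshold, the minimum write age, and the one-line-per-call pacing are illustrative choices, not details from the report.

```cpp
// preclean_sketch.cpp -- illustrative sketch of the precleaning idea, not the
// thesis implementation. The congestion threshold, minimum write age, and
// one-writeback-per-call pacing are assumptions made for this example.
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

struct L2Line {
  uint64_t tag = 0;
  uint64_t last_write = 0;  // cycle of the most recent store to this line
  bool valid = false;
  bool dirty = false;
};

class L2Cache {
 public:
  explicit L2Cache(size_t lines) : lines_(lines) {}

  // Store hit: the line becomes dirty. (Fills and evictions omitted for brevity.)
  void store(size_t idx, uint64_t tag, uint64_t cycle) {
    lines_[idx] = {tag, cycle, true, true};
  }

  // Precleaning: when the memory controller's write queue is shallow, push one
  // dirty line's data to DRAM and mark it clean. The line stays resident, so a
  // later eviction during a congested phase costs no write bandwidth.
  void maybe_preclean(std::deque<uint64_t>& dram_write_queue, uint64_t cycle,
                      size_t congestion_threshold = 4, uint64_t min_age = 100) {
    if (dram_write_queue.size() >= congestion_threshold) return;  // DRAM busy
    for (auto& line : lines_) {
      // Only preclean lines that have not been written recently, to avoid
      // writing back data that is still being actively updated.
      if (line.valid && line.dirty && cycle - line.last_write >= min_age) {
        dram_write_queue.push_back(line.tag);  // issue the early writeback
        line.dirty = false;                    // now clean, still cached
        return;                                // at most one preclean per call
      }
    }
  }

 private:
  std::vector<L2Line> lines_;
};

int main() {
  L2Cache l2(8);
  std::deque<uint64_t> dram_write_queue;

  l2.store(0, 0xA0, /*cycle=*/10);  // two lines become dirty early on
  l2.store(1, 0xB0, /*cycle=*/20);

  for (uint64_t cycle = 200; cycle < 203; ++cycle)
    l2.maybe_preclean(dram_write_queue, cycle);  // DRAM idle: clean them early

  std::cout << "writebacks issued ahead of eviction: "
            << dram_write_queue.size() << "\n";  // prints 2
}
```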

Subjects/Keywords: GPU; Hawkeye; Precleaning; Caching; Headroom; Cache replacement policy; GPU memory systems; Cache miss rates



APA (6th Edition):

Burden, C. A. (2018). Evaluating headroom for smart caching policies on GPUs. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/65984

Chicago Manual of Style (16th Edition):

Burden, Cassidy Aaron. “Evaluating headroom for smart caching policies on GPUs.” 2018. Masters Thesis, University of Texas – Austin. Accessed March 06, 2021. http://hdl.handle.net/2152/65984.

MLA Handbook (7th Edition):

Burden, Cassidy Aaron. “Evaluating headroom for smart caching policies on GPUs.” 2018. Web. 06 Mar 2021.

Vancouver:

Burden CA. Evaluating headroom for smart caching policies on GPUs. [Internet] [Masters thesis]. University of Texas – Austin; 2018. [cited 2021 Mar 06]. Available from: http://hdl.handle.net/2152/65984.

Council of Science Editors:

Burden CA. Evaluating headroom for smart caching policies on GPUs. [Masters Thesis]. University of Texas – Austin; 2018. Available from: http://hdl.handle.net/2152/65984
