You searched for subject:(compiler). Showing records 1 – 30 of 564 total matches.

University of Waterloo
1.
Selby, Jason W. A.
Unconventional Applications of Compiler Analysis.
Degree: 2011, University of Waterloo
URL: http://hdl.handle.net/10012/6184
Previously, compiler transformations have primarily focused on minimizing program execution time. This thesis explores some examples of applying compiler technology outside of its original scope. Specifically, we apply compiler analysis to the field of software maintenance and evolution by examining the use of global data throughout the lifetimes of many open source projects. Also, we investigate the effects of compiler optimizations on the power consumption of small battery-powered devices. Finally, in an area closer to traditional compiler research, we examine automatic program parallelization in the form of thread-level speculation.
Subjects/Keywords: Compiler Analysis; Compiler Optimization

University of Alberta
2.
Garg, Rahul.
A compiler for parallel execution of numerical Python programs on graphics processing units.
Degree: MS, Department of Computing Science, 2009, University of Alberta
URL: https://era.library.ualberta.ca/files/4j03d0630
Modern Graphics Processing Units (GPUs) are providing breakthrough performance for numerical computing at the cost of increased programming complexity. Current programming models for GPUs require that the programmer manually manage the data transfer between CPU and GPU. This thesis proposes a simpler programming model and introduces a new compilation framework to enable Python applications containing numerical computations to be executed on GPUs and multi-core CPUs. The new programming model minimally extends Python to include type and parallel-loop annotations. Our compiler framework then automatically identifies the data to be transferred between the main memory and the GPU for a particular class of affine array accesses. The compiler also automatically performs loop transformations to improve performance on GPUs. For kernels with regular loop structure and simple memory access patterns, the GPU code generated by the compiler achieves significant performance improvement over multi-core CPU codes.
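The annotation-driven model is easy to picture. Below is a rough Python sketch of the kind of kernel such a compiler targets; the parallel_kernel decorator and its behavior are illustrative assumptions, not the thesis's actual annotation syntax:

```python
def parallel_kernel(func):
    # Stand-in for the thesis's type/parallel-loop annotations: a real
    # implementation would compile func for the GPU and insert transfers.
    return func

@parallel_kernel
def matvec(A, x, y, n):
    for i in range(n):              # annotated parallel loop: iterations independent
        acc = 0.0
        for j in range(n):
            acc += A[i, j] * x[j]   # affine accesses A[i, j] and x[j] let the
        y[i] = acc                  # compiler bound exactly which data to transfer
```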
Subjects/Keywords: gpgpu; compiler

University of Alberta
3.
Xunhao, Li.
Jit4OpenCL: a compiler from Python to OpenCL.
Degree: MS, Department of Computing Science, 2010, University of Alberta
URL: https://era.library.ualberta.ca/files/1v53jx087
Heterogeneous computing platforms that use GPUs and CPUs in tandem have become an important choice for building low-cost high-performance computing platforms. For certain classes of applications, the computing ability of modern GPUs surpasses what CPUs can offer: GPUs can deliver several teraflops of peak performance. However, programmers must adopt a new, more complicated and more difficult programming paradigm. To alleviate the burden of programming for heterogeneous systems, Garg and Amaral developed a Python compilation framework that combines an ahead-of-time compiler called unPython with a just-in-time compiler called jit4GPU. This compilation framework generates code for systems with AMD GPUs. We extend the framework to retarget it to generate OpenCL code, an industry standard that is implemented for most GPUs. By generating OpenCL code, this new compiler, called jit4OpenCL, enables the execution of the same program on a wider selection of heterogeneous platforms. To further improve target-code performance on nVidia GPUs, we developed an array-access analysis tool that helps exploit data reusability by utilizing the shared (local) memory space hierarchy in OpenCL. The thesis presents an experimental performance evaluation indicating that, in comparison with jit4GPU, jit4OpenCL suffers performance degradation because of the current performance of OpenCL implementations and because of the extra time needed for the additional just-in-time compilation. However, the portable code generated by jit4OpenCL still achieves performance gains in some applications compared to highly optimized CPU code.
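The data-reuse idea behind the shared (local) memory optimization can be sketched in plain Python as loop tiling; this is only an illustration of the transformation, not jit4OpenCL's generated code:

```python
def blocked_matmul(A, B, C, n, tile=16):
    # Visit the iteration space tile by tile so each tile's operands are
    # loaded once into fast memory (OpenCL local memory on a GPU) and then
    # reused tile-width times, instead of being refetched from global memory.
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, n)):
                        acc = C[i][j]
                        for k in range(kk, min(kk + tile, n)):
                            acc += A[i][k] * B[k][j]
                        C[i][j] = acc
```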
Subjects/Keywords: Python; compiler; OpenCL

University of Toronto
4.
Calman, Silvian.
Interprocedural Static Single Assignment Form.
Degree: PhD, 2011, University of Toronto
URL: http://hdl.handle.net/1807/27573
Static Single Assignment (SSA) is an Intermediate Representation (IR) that simplifies the design and implementation of analyses and optimizations. While intraprocedural SSA is ubiquitous in modern compilers, the use of interprocedural SSA (ISSA), although seemingly a natural extension, is limited. In this dissertation, we propose new techniques to construct and integrate ISSA into modern compilers and evaluate the benefit of using ISSA form.
First, we present an algorithm that converts the IR into ISSA form by introducing new instructions. To our knowledge, this is the first IR-based ISSA proposed in the literature. Moreover, in comparison to previous work we increase the number of SSA variables, extend the scope of definitions to the whole program, and perform interprocedural copy propagation.
Next, we propose an out-of-ISSA translation that simplifies the integration of ISSA form into a compiler. Our out-of-ISSA translation algorithm enables us to leverage ISSA to improve performance without having to update every compiler pass. Moreover, we demonstrate the benefit of ISSA for a number of compiler optimizations.
Finally, we present an ISSA-based interprocedural induction variable analysis. Our implementation introduces only a few changes to the SSA-based implementation while enabling us to identify considerably more induction variables and compute more loop trip counts.
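As a concrete picture of what (intraprocedural) SSA means, here is a toy renamer for straight-line code in Python; real construction also places phi-functions at control-flow joins, and the interprocedural form extends definitions across procedure boundaries:

```python
def to_ssa(stmts):
    # stmts: (target, operands) pairs for straight-line code.
    # Every assignment creates a fresh version of its target, and each use
    # is rewritten to the latest version, so each variable is assigned once.
    version, out = {}, []
    for target, operands in stmts:
        ops = [f"{v}{version[v]}" if v in version else v for v in operands]
        version[target] = version.get(target, 0) + 1
        out.append((f"{target}{version[target]}", ops))
    return out

# x = a + b; x = x * 2; y = x + a
print(to_ssa([("x", ["a", "b"]), ("x", ["x", "2"]), ("y", ["x", "a"])]))
# [('x1', ['a', 'b']), ('x2', ['x1', '2']), ('y1', ['x2', 'a'])]
```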
Advisors/Committee Members: Zhu, Jianwen, Electrical and Computer Engineering.
Subjects/Keywords: compiler; analysis; 0544

University of Edinburgh
5.
Cummins, Christopher Edward.
Deep learning for compilers.
Degree: PhD, 2020, University of Edinburgh
URL: http://hdl.handle.net/1842/36866
Constructing compilers is hard. Optimising compilers are multi-million dollar projects spanning years of development, yet remain unable to fully exploit the available performance, and are prone to bugs. The rapid transition to heterogeneous parallelism and diverse architectures has raised demand for aggressively-optimising compilers to an all-time high, leaving compiler developers struggling to keep up. What is needed are better tools to simplify compiler construction. This thesis presents new techniques that dramatically lower the cost of compiler construction, while improving robustness and performance. The enabling insight for this research is the leveraging of deep learning to model the correlations between source code and program behaviour, enabling tasks which previously required significant engineering effort to be automated. This is demonstrated in three domains. First, a generative model for compiler benchmarks is developed. The model requires no prior knowledge of programming languages, yet produces output of such quality that professional software developers cannot distinguish generated from hand-written programs. The efficacy of the generator is demonstrated by supplementing the training data of predictive models for compiler optimisations. The generator yields an automatic improvement in heuristic performance, and exposes weaknesses in state-of-the-art approaches which, when corrected, yield further performance improvements. Second, a compiler fuzzer is developed which is far simpler than prior techniques. By learning a generative model rather than engineering a generator from scratch, it is implemented in 100 fewer lines of code than the state-of-the-art, yet is capable of exposing bugs which prior techniques cannot. An extensive testing campaign reveals 67 new bugs in OpenCL compilers, many of which have now been fixed. Finally, this thesis addresses the challenge of feature design. A methodology for learning compiler heuristics is presented that, in contrast to prior approaches, learns directly over the raw textual representation of programs. The approach outperforms state-of-the-art models with hand-engineered features in two challenging optimisation domains, without requiring any expert guidance. Additionally, the methodology enables models trained in one task to be adapted to perform another, permitting the novel transfer of information between optimisation problem domains. The techniques developed in these three contrasting domains demonstrate the exciting potential of deep learning to simplify and improve compiler construction. The outcomes of this thesis enable new lines of research to equip compiler developers to keep up with the rapidly evolving landscape of heterogeneous architectures.
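The thesis's generator is a deep sequence model over program text; as a deliberately tiny stand-in that only conveys the learn-then-sample workflow (a character-bigram model, nothing like the actual network), consider:

```python
import random
from collections import defaultdict

def train(corpus):
    # Learn P(next char | current char) from a corpus of program text.
    model = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        model[a].append(b)
    return model

def sample(model, seed="k", length=60):
    out = [seed]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return "".join(out)

corpus = "kernel void add(global float* a) { a[0] += 1.0f; }"
print(sample(train(corpus)))   # emits OpenCL-flavoured gibberish; a deep model
                               # trained on a large corpus emits plausible kernels
```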
Subjects/Keywords: optimising compilers; compiler construction; deep learning; generative model; compiler fuzzer; compiler heuristics

University of Edinburgh
6.
Cummins, Christopher Edward.
Deep learning for compilers.
Degree: PhD, 2020, University of Edinburgh
URL: https://doi.org/10.7488/era/168 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.802366
Constructing compilers is hard. Optimising compilers are multi-million dollar projects spanning years of development, yet remain unable to fully exploit the available performance, and are prone to bugs. The rapid transition to heterogeneous parallelism and diverse architectures has raised demand for aggressively-optimising compilers to an all-time high, leaving compiler developers struggling to keep up. What is needed are better tools to simplify compiler construction. This thesis presents new techniques that dramatically lower the cost of compiler construction, while improving robustness and performance. The enabling insight for this research is the leveraging of deep learning to model the correlations between source code and program behaviour, enabling tasks which previously required significant engineering effort to be automated. This is demonstrated in three domains. First, a generative model for compiler benchmarks is developed. The model requires no prior knowledge of programming languages, yet produces output of such quality that professional software developers cannot distinguish generated from hand-written programs. The efficacy of the generator is demonstrated by supplementing the training data of predictive models for compiler optimisations. The generator yields an automatic improvement in heuristic performance, and exposes weaknesses in state-of-the-art approaches which, when corrected, yield further performance improvements. Second, a compiler fuzzer is developed which is far simpler than prior techniques. By learning a generative model rather than engineering a generator from scratch, it is implemented in 100 fewer lines of code than the state-of-the-art, yet is capable of exposing bugs which prior techniques cannot. An extensive testing campaign reveals 67 new bugs in OpenCL compilers, many of which have now been fixed. Finally, this thesis addresses the challenge of feature design. A methodology for learning compiler heuristics is presented that, in contrast to prior approaches, learns directly over the raw textual representation of programs. The approach outperforms state-of-the-art models with hand-engineered features in two challenging optimisation domains, without requiring any expert guidance. Additionally, the methodology enables models trained in one task to be adapted to perform another, permitting the novel transfer of information between optimisation problem domains. The techniques developed in these three contrasting domains demonstrate the exciting potential of deep learning to simplify and improve compiler construction. The outcomes of this thesis enable new lines of research to equip compiler developers to keep up with the rapidly evolving landscape of heterogeneous architectures.
Subjects/Keywords: optimising compilers; compiler construction; deep learning; generative model; compiler fuzzer; compiler heuristics

Penn State University
7.
Kislal, Orhan Memduh.
Hardware-Aware Computation Reorganization for Memory Intensive Applications.
Degree: 2018, Penn State University
URL: https://submit-etda.libraries.psu.edu/catalog/15032omk103
After hitting the power wall, the dramatic change in computer architecture from single core to multicore/manycore brings new challenges for high-performance computing, especially for data-intensive applications. Data access costs dominate the execution times of most parallel applications, and they are expected to be even more important in the future. Under these circumstances, the organization of data and computation across the available resources has a major effect on the performance of the overall system. This dissertation explores the reorganization problem from a hardware-aware perspective to fully harness the underlying architecture and demonstrates various methods to improve memory performance. These methods span both domain-specific solutions for some memory-intensive kernels of high importance and domain-agnostic optimization techniques.
This dissertation approaches the problem of reorganization from two different perspectives. While the traditional methods of organizing data and computation, namely mapping and scheduling, remain highly influential and beneficial, we also evaluate the idea of approximate computing in this context and reorganize data and computation based on their predicted importance. Our exploration includes the following steps. On the domain-specific side, we apply mapping, scheduling, and data-layout reorganization techniques to the sparse matrix-vector multiplication problem. In addition, we improve the k-means clustering algorithm with computation reordering as well as multiple skipping heuristics, and propose a cache-skipping module for data mining algorithms, exploring its benefits with recursive partitioning algorithms. On the domain-agnostic side, we explore location-aware and data-movement-aware computation reorganization techniques, as well as a code-slicing technique that skips high-cost, low-importance data accesses. Our detailed experiments show significant improvements in all cases: up to 25% for the domain-specific optimizations and up to 18% for the domain-agnostic techniques.
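As one concrete illustration of a skipping heuristic (our own sketch, not the dissertation's exact algorithms): in the k-means assignment step, a point whose current centroid barely moved since the last iteration can keep its assignment and skip the full distance scan, trading a little accuracy for far fewer data accesses, in the spirit of the approximate-computing trade-off described above:

```python
import numpy as np

def assign_with_skipping(points, centroids, prev_labels, moved, eps=1e-3):
    # moved[c] is how far centroid c travelled in the last update step.
    labels = np.empty(len(points), dtype=int)
    for i, p in enumerate(points):
        c = prev_labels[i]
        if moved[c] < eps:
            labels[i] = c                # skip: assignment very unlikely to change
        else:
            labels[i] = np.argmin(((centroids - p) ** 2).sum(axis=1))
    return labels
```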
Advisors/Committee Members: Mahmut Taylan Kandemir, Dissertation Advisor/Co-Advisor, Mahmut Taylan Kandemir, Committee Chair/Co-Chair, Kamesh Madduri, Committee Member, John Morgan Sampson, Committee Member, Conrad S Tucker, Outside Member.
Subjects/Keywords: memory; performance; compiler; approximate computing

University of Illinois – Chicago
8.
Casula, Dario.
Witnessing Control Flow Graph Optimizations.
Degree: 2016, University of Illinois – Chicago
URL: http://hdl.handle.net/10027/20974
Proving the correctness of a program transformation, and specifically of a compiler optimization, is a long-standing research problem. Trusting the compiler requires a guarantee that the properties verified on the source program hold for the compiled target code as well. Thus, the primary objective of formal correctness verification is to preserve the semantics of the source code, leaving its logical behavior untouched.
Traditional methods for formal correctness verification are not practical for validating large and complex programs like compilers, and intensive testing, despite its proven efficacy, cannot guarantee the absence of bugs.
This thesis is part of a larger ongoing research project that aims to demonstrate the feasibility of overcoming the difficulties of traditional formal methods. K. Namjoshi and L. Zuck propose a new methodology for creating an automated proof that guarantees the correctness of every execution of an optimization. A witness is a run-time-generated relation between the source code and the target code, before and after the transformation. The relation represents all the properties that must remain valid throughout the optimization, offering a mathematical formula with which an SMT solver (typically Microsoft Z3) can prove whether the invariants hold and the semantics is preserved. This work is a further step towards the implementation of a witnessing compiler: the SimplifyCFG pass of the LLVM compiler framework is augmented with a witness-generator procedure which constructs, at run time, the relations needed to prove the correctness of every single simplification the compiler performs on the control flow graph.
We show that it is feasible to augment the SimplifyCFG pass with a witness-generation procedure. We describe the structure of the code and the mathematical relations designed to demonstrate the correctness of a transformation on the control flow graph. Benchmarks and tests demonstrate the correct behavior of our implementation and the effectiveness of the witnessing procedure. We provide details about the witnesses and the results of the benchmarks.
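Z3's Python bindings give a feel for such a proof obligation. This toy example (our own, far simpler than the thesis's witness relations) checks that folding a branch whose two arms compute the same value preserves semantics for every input:

```python
from z3 import Bool, Int, If, prove

c = Bool("c")                      # branch condition in the source CFG
x, y = Int("x"), Int("y")

# Source CFG:  r = c ? (x + y) : (y + x)
# Target CFG:  r = x + y          (the branch was folded away)
# The witness obligation: source and target agree for all c, x, y.
prove(If(c, x + y, y + x) == x + y)   # Z3 prints "proved"
```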
Advisors/Committee Members: Zuck, Lenore D. (advisor), Gjomemo, Rigel (committee member), Santambrogio, Marco (committee member).
Subjects/Keywords: llvm; witness; z3; CFG; compiler

University of Delaware
9.
Chen, Yuanfang.
Software simultaneous multithreading through compilation.
Degree: PhD, Department of Electrical and Computer Engineering, 2018, University of Delaware
URL: http://udspace.udel.edu/handle/19716/23594
With Dennard scaling having broken down long ago, computer architecture design has progressed towards wider rather than deeper organizations. There are three ways to design a wider architecture: 1. putting more cores on the die to exploit thread-level parallelism (TLP); 2. putting more execution ports in the pipeline to exploit instruction-level parallelism (ILP); 3. making vector registers wider to exploit data-level parallelism (DLP). To speed up a wide spectrum of applications, modern CPU processors usually have all of these characteristics at the same time. However, not all applications can make effective use of them simultaneously. Using any of them efficiently is still a challenging problem in the optimizing-compiler research community, even though these problems are not new. Processor architects designed simultaneous multithreading (SMT) to alleviate the problem.
Simultaneous multithreading is an essential technique for improving pipeline resource utilization and the overall power efficiency of chips, especially when the processor is either wide or built around an in-order pipeline. For a wide-issue superscalar processor, there are two kinds of wasted issue slots: vertical waste, where all issue slots in a cycle are empty, and horizontal waste, where the issue slots in a cycle are partially empty [74]. Simultaneous multithreading, contrary to its two counterparts, fine-grained multithreading and coarse-grained multithreading, can fill both vertical and horizontal waste, enhancing overall efficiency. From the point of view of user applications, there are two ways to improve speed or throughput: thread-level parallelism (TLP) and instruction-level parallelism (ILP). Simultaneous multithreading can exploit both TLP and ILP in the same cycle, whereas fine-grained or coarse-grained multithreading can exploit only TLP or ILP in a single cycle.
Despite all the benefits brought by simultaneous multithreading, it has been adopted by semiconductor chip makers at a slow pace. AMD's most recent Zen processor is its first CPU product featuring SMT. The only other well-known chip makers that offer SMT-enabled processors are Intel and IBM. The reason is that SMT is very complex to implement: many of the pipeline stages and the memory system need hardware logic for an efficient SMT implementation. For embedded chips, SMT is not even an affordable choice.
To harvest the benefits provided by SMT without incurring significant hardware costs, we propose a compiler-based SMT implementation framework called CSSMT that achieves performance comparable to hardware-based SMT. With the help of advanced profiling techniques enabled by the precise PMU counters in modern CPUs, CSSMT can identify applications that could potentially benefit from SMT and guide our LLVM-based compiler to merge the hot spots of threads co-running in the same pipeline. CSSMT is orthogonal to the effect of hardware SMT and can bail out when the merging is not profitable based on its cost model derived…
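The core merging idea can be pictured in a few lines (an illustrative sketch only, not CSSMT's actual IR transformation): the hot loops of two co-running threads are fused so that independent instructions from both fill one pipeline's issue slots:

```python
# Thread 1's hot loop:  for i: out1[i] = a[i] * 2
# Thread 2's hot loop:  for i: out2[i] = b[i] + 1
def fused(a, b, out1, out2):
    # After compiler merging, independent work from both threads shares one
    # pipeline each iteration, filling issue slots a single thread would waste.
    for i in range(min(len(a), len(b))):
        out1[i] = a[i] * 2
        out2[i] = b[i] + 1
```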
Advisors/Committee Members: Li, Xiaoming.
Subjects/Keywords: Applied sciences; Compiler; LLVM; SMT

Brno University of Technology
10.
Hranáč, Jan.
Překlad do různých asemblerů: Translation to Various Assembly Languages.
Degree: 2019, Brno University of Technology
URL: http://hdl.handle.net/11012/55429
The goal of this project is to create a compiler capable of compiling the input language into various assemblers (chosen by the user). This is achieved by making the compiler extensible with modules that implement the generation of source files for the concrete assembler types. The compiler serves as a generator of parts of assembler source code, making the assembler programmer's work easier. The input language is derived from Pascal but is closer to assembler than canonical Pascal.
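A minimal sketch of what such a pluggable backend interface could look like (hypothetical class and method names; the project's real module API may differ):

```python
class AsmBackend:
    """Each assembler module implements the same small emission interface."""
    def load_const(self, reg, value): raise NotImplementedError
    def add(self, dst, src): raise NotImplementedError

class X86Backend(AsmBackend):
    def load_const(self, reg, value): return f"mov {reg}, {value}"
    def add(self, dst, src): return f"add {dst}, {src}"

class M68kBackend(AsmBackend):
    def load_const(self, reg, value): return f"move.l #{value},{reg}"
    def add(self, dst, src): return f"add.l {src},{dst}"

def emit_increment(backend, dst, tmp):
    # The compiler core stays target-independent; the user's backend choice
    # decides which assembler dialect the generated fragment uses.
    # (Register allocation is omitted: callers pass target register names.)
    return [backend.load_const(tmp, 1), backend.add(dst, tmp)]

print(emit_increment(X86Backend(), "eax", "ebx"))   # ['mov ebx, 1', 'add eax, ebx']
print(emit_increment(M68kBackend(), "d0", "d1"))    # ['move.l #1,d1', 'add.l d1,d0']
```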
Advisors/Committee Members: Meduna, Alexandr (advisor), Goldefus, Filip (referee).
Subjects/Keywords: překladač; asembler; compiler; assembler

University of Oregon
11.
Clauson, Aran.
Search-based Optimization for Compiler Machine-code Generation.
Degree: PhD, Department of Computer and Information Science, 2013, University of Oregon
URL: http://hdl.handle.net/1794/13433
Compilation encompasses many steps. Parsing turns the input program into a more manageable syntax tree. Verification ensures that the program makes some semblance of sense. Finally, code generation transforms the internal abstract program representation into an executable program. Compilers strive to produce the best possible programs. Optimizations are applied at nearly every level of compilation.
Instruction Scheduling is one of the last compilation tasks. It is part of code generation. Instruction Scheduling replaces the internal graph representation of the program with an instruction sequence. The scheduler should produce some sequence that the hardware can execute quickly. Considering that Instruction Scheduling is an NP-Complete optimization problem, it is interesting that schedules are usually generated by a greedy, heuristic algorithm called List Scheduling.
Given search-based algorithms' successes in other NP-Complete optimization domains, we ask whether search-based algorithms can be applied to Instruction Scheduling to generate superior schedules without unacceptably increasing compilation time.
To answer this question, we formulate a problem description that captures practical scheduling constraints. We show that this problem is NP-Complete given modest requirements on the actual hardware. We adapt three different search algorithms to Instruction Scheduling in order to show that search is an effective Instruction Scheduling technique. The schedules generated by our algorithms are generally shorter than those generated by List Scheduling. Search-based scheduling does take more time, but the increases are acceptable for some compilation domains.
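For reference, the greedy baseline the thesis compares against is compact. A sketch of List Scheduling for a single-issue machine (illustrative; the priority is typically critical-path height):

```python
def list_schedule(nodes, deps, latency, priority):
    # nodes: ops in a DAG; deps[n]: ops n depends on; latency[n]: cycles;
    # priority[n]: rank, e.g. critical-path height. Single-issue for brevity.
    done_at, schedule, remaining, cycle = {}, [], set(nodes), 0
    while remaining:
        ready = [n for n in remaining
                 if all(d in done_at and done_at[d] <= cycle for d in deps[n])]
        if ready:
            n = max(ready, key=lambda m: priority[m])   # greedy choice
            schedule.append((cycle, n))
            done_at[n] = cycle + latency[n]             # result available then
            remaining.remove(n)
        cycle += 1
    return schedule

# "mul" consumes the result of "add", so it must wait for add's latency:
print(list_schedule(["add", "mul"], {"add": set(), "mul": {"add"}},
                    {"add": 1, "mul": 3}, {"add": 1, "mul": 0}))
# [(0, 'add'), (1, 'mul')]
```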
Advisors/Committee Members: Wilson, Christopher (advisor).
Subjects/Keywords: Code-generation; Compiler; Search

University of Notre Dame
12.
Peter James Bui.
A Compiler Toolchain for Distributed Data Intensive Scientific Workflows.
Degree: Computer Science and Engineering, 2012, University of Notre Dame
URL: https://curate.nd.edu/show/pk02c823v2f
With the growing amount of computational resources available to researchers today and the explosion of scientific data in modern research, it is imperative that scientists be able to construct data processing applications that harness these vast computing systems. To address this need, I propose applying concepts from traditional compilers, linkers, and profilers to the construction of distributed workflows and evaluate this approach by implementing a compiler toolchain that allows users to compose scientific workflows in a high-level programming language. In this dissertation, I describe the execution and programming model of this compiler toolchain. Next, I examine four compiler optimizations and evaluate their effectiveness at improving the performance of various distributed workflows. Afterwards, I present a set of linking utilities for packaging workflows and a group of profiling tools for analyzing and debugging workflows. Finally, I discuss modifications made to the run-time system to support features such as enhanced provenance information and garbage collection. Altogether, these components form a compiler toolchain that demonstrates the effectiveness of applying traditional compiler techniques to the challenges of constructing distributed data intensive scientific workflows.
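A minimal sketch of the compose-then-compile idea in Python (a hypothetical API invented for illustration; the dissertation's toolchain has its own language and engines): tasks and their file dependencies are recorded as a graph, then lowered to rules a distributed batch execution engine can run:

```python
class Workflow:
    def __init__(self):
        self.tasks = []               # the abstract task graph

    def task(self, cmd, inputs, outputs):
        self.tasks.append((cmd, inputs, outputs))
        return outputs

    def compile(self):
        # Lowering step: emit one rule per task for a Make-style batch engine.
        # A real toolchain would also optimize here (e.g., group small tasks).
        return "\n".join(f"{' '.join(outs)}: {' '.join(ins)}\n\t{cmd}"
                         for cmd, ins, outs in self.tasks)

w = Workflow()
parts = w.task("split input.fa", ["input.fa"], ["part1.fa", "part2.fa"])
w.task("blast part1.fa > out1", ["part1.fa"], ["out1"])
print(w.compile())
```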
Advisors/Committee Members: Patrick Flynn, Committee Member, Scott Emrich, Committee Member, Douglas Thain, Committee Chair, Jesus Izaguirre, Committee Member.
Subjects/Keywords: compiler; distributed systems; workflows; python
13.
Hebert, Chris.
Inferring Types to Eliminate Ownership Checks in an Intentional JavaScript Compiler.
Degree: MS, 2015, University of New Hampshire
URL: https://scholars.unh.edu/thesis/1021
Concurrent programs are notoriously difficult to develop due to the non-deterministic nature of thread scheduling. It is desirable to have a programming language that makes such development easier. Tscript comprises such a system. Tscript is an extension of JavaScript that provides multithreading support along with intent specification. These intents allow a programmer to specify how parts of the program interact in a multithreaded context. However, enforcing intents requires run-time memory checks, which can be inefficient. This thesis implements an optimization in the Tscript compiler that seeks to address this inefficiency through static analysis. Our approach utilizes both type inference and dataflow analysis to eliminate unnecessary run-time checks.
Advisors/Committee Members: Phil Hatcher, Michel Charpentier, Wheeler Ruml.
Subjects/Keywords: compiler; javascript; optimization; Computer science

Georgia Tech
14.
Gupta, Meghana.
Code generation and adaptive control divergence management for light weight SIMT processors.
Degree: MS, Computer Science, 2016, Georgia Tech
URL: http://hdl.handle.net/1853/55044
The energy costs of data movement are limiting the performance scaling of future generations of high-performance computing architectures targeted at data-intensive applications. The result has been a resurgence of interest in processing-in-memory (PIM) architectures. This challenge has spawned the development of a scalable, parametric data-parallel architecture referred to as the Heterogeneous Architecture Research Prototype (HARP): a single instruction multiple thread (SIMT) architecture for integration into DRAM systems, particularly 3D memory stacks, as a distinct processing layer that exploits the enormous internal memory bandwidth. However, this potential can only be realized with an optimizing compilation environment. This thesis addresses this challenge by (i) constructing an open-source compiler for HARP, and (ii) integrating optimizations for handling control-flow divergence for HARP instances. The HARP compiler is built using the LLVM open-source compiler infrastructure. Apart from traditional code generation, the HARP compiler backend handles unique challenges associated with the HARP instruction set. Chief among them is code generation for control-divergence management techniques. The HARP architecture and compiler support (i) a hardware reconvergence stack and (ii) predication to handle divergent branches. The HARP compiler addresses several challenges associated with generating code for these two control-divergence management techniques and implements multiple analyses and transformations for code generation. Each technique has unique advantages and disadvantages depending on whether a conditional branch is likely to be unanimous or not. Two decision frameworks, guided by static analysis and dynamic profile information, are implemented to choose between the control-divergence management techniques by analyzing the nature of the conditional branches and utilizing this information during compilation.
Advisors/Committee Members: Yalamanchili, Sudhakar (advisor), Kim, Hyesoon (committee member), Pande, Santosh (committee member).
Subjects/Keywords: Compiler; SIMT; Control divergence

University of Edinburgh
15.
Chandramohan, Kiran.
Mapping parallelism to heterogeneous processors.
Degree: PhD, 2016, University of Edinburgh
URL: http://hdl.handle.net/1842/22028
Most embedded devices are based on heterogeneous Multiprocessor System on Chips (MPSoCs). These contain a variety of processors like CPUs, micro-controllers, DSPs, GPUs and specialised accelerators. The heterogeneity of these systems helps in achieving good performance and energy efficiency but makes programming inherently difficult. There is no single programming language or runtime to program such platforms. This thesis makes three contributions to these problems. First, it presents a framework that allows code in Single Program Multiple Data (SPMD) form to be mapped to a heterogeneous platform. The mapping space is explored, and it is shown that the best mapping depends on the metric used. Next, a compiler framework is presented which bridges the gap between the high-level programming model of OpenMP and the heterogeneous resources of MPSoCs. It takes OpenMP programs and generates code which runs on all processors. It delivers programming ease while exploiting heterogeneous resources. Finally, a compiler-based approach to runtime power management for heterogeneous cores is presented. Given an externally provided budget, the approach generates heterogeneous, partitioned code that attempts to give the best performance within that budget.
Subjects/Keywords: 004; heterogeneous processors; compiler
16.
Mitropoulou, Konstantina.
Performance optimizations for compiler-based error detection.
Degree: PhD, 2015, University of Edinburgh
URL: http://hdl.handle.net/1842/10473
The trend towards smaller transistor technologies and lower operating voltages stresses the hardware and makes transistors more susceptible to transient errors. In future systems, performance and power gains will come at the cost of unreliable areas on the chip. For this reason, there is an increased need for low-overhead, highly-reliable error detection methodologies. In recent years, several techniques have been proposed. The majority of them are based on redundancy, which can be implemented at several levels (e.g., hardware, instruction, thread, process, etc.). In instruction-level error detection approaches, the compiler replicates the instructions of the program and inserts checks wherever they are needed. The checks evaluate code correctness and decide whether or not an error has occurred. This type of error detection is more flexible than the hardware alternatives. It allows the programmer to choose the protected area of the program and it can be applied without any hardware modifications. On the other hand, the replicated instructions and the checks cause a large slowdown, making software techniques less appealing. In this thesis, we propose two techniques that aim at reducing the error detection overhead of compiler-based approaches and improving the system's performance without sacrificing fault coverage. The first technique, DRIFT, achieves this by decoupling the execution of the code (original and replicated) from the checks. The checks are compare and jump instructions, which tend to make the code sequential and prohibit the compiler from performing aggressive instruction scheduling optimizations. We call this phenomenon basic-block fragmentation. DRIFT reduces the impact of basic-block fragmentation by breaking the synchronized execute-check-confirm-execute cycle. In this way, DRIFT generates scheduler-friendly code with more instruction-level parallelism (ILP). As a result, it reduces the performance overhead down to 1.29× (on average) and outperforms the state-of-the-art by up to 29.7% while retaining the same fault coverage. Next, CASTED focuses on reducing the impact of error detection overhead on single-chip scalable architectures that are composed of tightly-coupled cores. The proposed compiler methodology adaptively distributes the error detection overhead to the available resources across multiple cores, fully exploiting the abundant ILP of these architectures. CASTED adapts to a wide range of architecture configurations (issue-width, inter-core communication). The results show that CASTED matches the performance of, and often outperforms, sometimes by as much as 21.2%, the best fixed state-of-the-art approach while maintaining the same fault coverage.
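A scalar Python sketch of the idea (illustrative; the thesis operates on compiler IR): in the synchronized scheme every computation is immediately followed by its check, fragmenting basic blocks, whereas DRIFT-style decoupling batches computations before confirming them:

```python
def add_checked(a, b):
    r1 = a + b                 # original instruction
    r2 = a + b                 # compiler-inserted replica
    if r1 != r2:               # check: the compare-and-jump that fragments blocks
        raise RuntimeError("transient fault detected")
    return r1

def add_decoupled(pairs):
    # DRIFT-style idea: run several original/replica computations back to back
    # (more ILP for the scheduler), then confirm them in one batch of checks.
    results  = [a + b for a, b in pairs]
    replicas = [a + b for a, b in pairs]
    if results != replicas:
        raise RuntimeError("transient fault detected")
    return results
```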
Subjects/Keywords: 005.75; fault tolerance; compiler
APA (6th Edition):
Mitropoulou, K. (2015). Performance optimizations for compiler-based error detection. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/10473
Chicago Manual of Style (16th Edition):
Mitropoulou, Konstantina. “Performance optimizations for compiler-based error detection.” 2015. Doctoral Dissertation, University of Edinburgh. Accessed April 18, 2021.
http://hdl.handle.net/1842/10473.
MLA Handbook (7th Edition):
Mitropoulou, Konstantina. “Performance optimizations for compiler-based error detection.” 2015. Web. 18 Apr 2021.
Vancouver:
Mitropoulou K. Performance optimizations for compiler-based error detection. [Internet] [Doctoral dissertation]. University of Edinburgh; 2015. [cited 2021 Apr 18].
Available from: http://hdl.handle.net/1842/10473.
Council of Science Editors:
Mitropoulou K. Performance optimizations for compiler-based error detection. [Doctoral Dissertation]. University of Edinburgh; 2015. Available from: http://hdl.handle.net/1842/10473

Louisiana State University
17.
Hanagodimath, Pratik Prabhu.
Performance Comparison Between Patus and Pluto Compilers on Stencils.
Degree: MSEE, Electrical and Computer Engineering, 2014, Louisiana State University
URL: etd-04142014-090546 ; https://digitalcommons.lsu.edu/gradschool_theses/2636
This thesis compares the performance of the Patus and Pluto compilers on stencil applications. Stencils are written in both Jacobi and Seidel coding styles, and the performance of the two compilers is analysed for each style.
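For readers unfamiliar with the two coding styles, here is a minimal sketch (illustrative only; the grid size and coefficients are assumptions, not the benchmark code): a Jacobi-style stencil reads one array and writes another, while a Seidel-style stencil updates in place, so later points in a sweep see already-updated neighbors.

#include <stdio.h>

#define N 512
static double a[N][N], b[N][N];

void jacobi_sweep(void) {   /* reads a, writes b: points are independent */
    for (int i = 1; i < N - 1; i++)
        for (int j = 1; j < N - 1; j++)
            b[i][j] = 0.2 * (a[i][j] + a[i-1][j] + a[i+1][j]
                             + a[i][j-1] + a[i][j+1]);
}

void seidel_sweep(void) {   /* updates a in place: carries dependences */
    for (int i = 1; i < N - 1; i++)
        for (int j = 1; j < N - 1; j++)
            a[i][j] = 0.2 * (a[i][j] + a[i-1][j] + a[i+1][j]
                             + a[i][j-1] + a[i][j+1]);
}

int main(void) {
    a[N/2][N/2] = 1.0;
    jacobi_sweep();
    seidel_sweep();
    printf("%g %g\n", b[N/2][N/2], a[N/2][N/2]);
    return 0;
}

The in-place update gives Seidel-style stencils loop-carried dependences, which is why the two styles stress an optimizing stencil compiler quite differently.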
Subjects/Keywords: Compiler optimization; parallel execution.
APA (6th Edition):
Hanagodimath, P. P. (2014). Performance Comparison Between Patus and Pluto Compilers on Stencils. (Masters Thesis). Louisiana State University. Retrieved from etd-04142014-090546 ; https://digitalcommons.lsu.edu/gradschool_theses/2636
Chicago Manual of Style (16th Edition):
Hanagodimath, Pratik Prabhu. “Performance Comparison Between Patus and Pluto Compilers on Stencils.” 2014. Masters Thesis, Louisiana State University. Accessed April 18, 2021.
etd-04142014-090546 ; https://digitalcommons.lsu.edu/gradschool_theses/2636.
MLA Handbook (7th Edition):
Hanagodimath, Pratik Prabhu. “Performance Comparison Between Patus and Pluto Compilers on Stencils.” 2014. Web. 18 Apr 2021.
Vancouver:
Hanagodimath PP. Performance Comparison Between Patus and Pluto Compilers on Stencils. [Internet] [Masters thesis]. Louisiana State University; 2014. [cited 2021 Apr 18].
Available from: etd-04142014-090546 ; https://digitalcommons.lsu.edu/gradschool_theses/2636.
Council of Science Editors:
Hanagodimath PP. Performance Comparison Between Patus and Pluto Compilers on Stencils. [Masters Thesis]. Louisiana State University; 2014. Available from: etd-04142014-090546 ; https://digitalcommons.lsu.edu/gradschool_theses/2636

University of Georgia
18.
Li, Nan.
Energy-efficient program layout for multi-bank architectures.
Degree: 2014, University of Georgia
URL: http://hdl.handle.net/10724/21321
► Energy conservation is an important problem for battery-powered embedded or portable systems. New technology such as RDRAM enables memory to operate at different power levels.…
(more)
▼ Energy conservation is an important problem for battery-powered embedded or portable systems. New technology such as RDRAM enables memory to operate at different power levels. This allows memory to be partitioned such that only the necessary parts of memory are in active mode, while the others are in low power modes. The traditional program layout has code and heap at one end, and stack at the other end. However, it is possible to place the code in the high address range (next to the stack) without loss of functionality. This thesis explores the energy impact of those two layouts in a partitioned power-aware memory system and presents a static program analysis technique to predict the more energy-efficient layout for a given program. We verify the effectiveness of our analysis by running MiBench programs on an enhanced Simplescalar-based power simulator. Our experimental results show that the new layout saves up to 43% of the memory subsystem energy when compared to the traditional layout, with an average improvement of 12%, on a cache-less CPU (such as the widely used ARM7TDMI). Our static analysis correctly predicts 13 out of 15 benchmarks from the MiBench suite. We also evaluate our scheme on a processor with several cache configurations. The cached configurations benefit much less (averaging less than 1%), though some programs see as much as a 25% energy savings from the non-traditional layout.
Subjects/Keywords: Compiler; Linker; Energy Saving; RDRAM
APA (6th Edition):
Li, N. (2014). Energy-efficient program layout for multi-bank architectures. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/21321
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Li, Nan. “Energy-efficient program layout for multi-bank architectures.” 2014. Thesis, University of Georgia. Accessed April 18, 2021.
http://hdl.handle.net/10724/21321.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Li, Nan. “Energy-efficient program layout for multi-bank architectures.” 2014. Web. 18 Apr 2021.
Vancouver:
Li N. Energy-efficient program layout for multi-bank architectures. [Internet] [Thesis]. University of Georgia; 2014. [cited 2021 Apr 18].
Available from: http://hdl.handle.net/10724/21321.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Li N. Energy-efficient program layout for multi-bank architectures. [Thesis]. University of Georgia; 2014. Available from: http://hdl.handle.net/10724/21321
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
19.
Liu, Qingrui.
Compiler-Directed Error Resilience for Reliable Computing.
Degree: PhD, Computer Engineering, 2018, Virginia Tech
URL: http://hdl.handle.net/10919/84526
► Error resilience has become as important as power and performance in modern computing architecture. There are various sources of errors that can paralyze real-world computing…
(more)
▼ Error resilience has become as important as power and performance in modern computing architecture. There are various sources of errors that can paralyze real-world computing systems. Of particular interest to this dissertation are single-event errors. They can be the result of energetic particle strikes or abrupt power outages that corrupt the program state, leading to system failures. Specifically, energetic particle strikes are the major cause of soft errors, while an abrupt power outage can result in memory inconsistency in nonvolatile memory systems. Unfortunately, existing techniques to handle those single-event errors are either resource-consuming (e.g., hardware approaches) or heavy-weight (e.g., software approaches). To address this problem, this dissertation identifies idempotent processing as an alternative recovery technique that handles system failures in an efficient and low-cost manner. The dissertation first proposes a compiler-directed lightweight methodology which leverages idempotent processing and state-of-the-art sensor-based detection to achieve soft error resilience at low cost. It also introduces a lightweight soft-error-tolerant hardware design that redefines idempotent processing so that idempotent regions can be created, verified and recovered from the processor's point of view. Furthermore, it proposes a series of compiler optimizations that significantly reduce the hardware and runtime overhead of idempotent processing. Lastly, it proposes a failure-atomic system integrated with idempotent processing to resolve another type of single-event error: failure-induced memory inconsistency in nonvolatile memory systems.
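A minimal sketch of what makes a region idempotent (an illustration of the concept only; the dissertation's compiler operates on IR and pairs regions with sensor-based detection): a region that never overwrites its own inputs can simply be re-executed from its entry after an error, with no logging.

#include <stdio.h>

/* Idempotent region: the inputs 'in' are never overwritten, so running
   it again after a detected error reproduces the same 'out'. */
static void region_idempotent(const int *in, int *out, int n) {
    for (int i = 0; i < n; i++) out[i] = in[i] * 2;
}

/* Non-idempotent region: 'a' is both input and output, so re-execution
   after a partial run doubles elements twice. */
static void region_in_place(int *a, int n) {
    for (int i = 0; i < n; i++) a[i] *= 2;
}

int main(void) {
    int in[3] = {1, 2, 3}, out[3], a[3] = {1, 2, 3};
    region_idempotent(in, out, 3);
    region_idempotent(in, out, 3);   /* simulated recovery: same result */
    region_in_place(a, 3);
    region_in_place(a, 3);           /* simulated recovery: state corrupted */
    printf("idempotent: %d %d %d   in-place: %d %d %d\n",
           out[0], out[1], out[2], a[0], a[1], a[2]);
    return 0;
}

The compiler's job, roughly, is to cut the program into regions free of such destructive overwrites (or to insert copies where they occur), so that recovery is just a jump back to the region entry.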
Advisors/Committee Members: Jung, Changhee (committeechair), Zeng, Haibo (committeechair), Ravindran, Binoy (committee member), Schaumont, Patrick Robert (committee member), Min, Chang Woo (committee member).
Subjects/Keywords: Reliability; Compiler Optimization; Computer Architecture
APA (6th Edition):
Liu, Q. (2018). Compiler-Directed Error Resilience for Reliable Computing. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/84526
Chicago Manual of Style (16th Edition):
Liu, Qingrui. “Compiler-Directed Error Resilience for Reliable Computing.” 2018. Doctoral Dissertation, Virginia Tech. Accessed April 18, 2021.
http://hdl.handle.net/10919/84526.
MLA Handbook (7th Edition):
Liu, Qingrui. “Compiler-Directed Error Resilience for Reliable Computing.” 2018. Web. 18 Apr 2021.
Vancouver:
Liu Q. Compiler-Directed Error Resilience for Reliable Computing. [Internet] [Doctoral dissertation]. Virginia Tech; 2018. [cited 2021 Apr 18].
Available from: http://hdl.handle.net/10919/84526.
Council of Science Editors:
Liu Q. Compiler-Directed Error Resilience for Reliable Computing. [Doctoral Dissertation]. Virginia Tech; 2018. Available from: http://hdl.handle.net/10919/84526

University of New South Wales
20.
Sewell, Thomas.
Translation validation for verified, efficient and timely operating systems.
Degree: Computer Science & Engineering, 2017, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/58861 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:47819/SOURCE02?view=true
► Computer software is typically written in one language and then translated out of that language into the native binary languages of the machines the software will run…
(more)
▼ Computer software is typically written in one language and then translated out of that language into the native binary languages of the machines the software will run on. Most operating systems, for instance, are written in the low-level language C and translated by a C compiler. Translation validation is the act of checking that this translation is correct. This dissertation presents an approach and framework for validating the translation of C programs, and three experiments which test the approach. Our validation approach consists of three components, a frontend, a backend and a core, which broadly mirrors the design of the C compiler. The three experiments in this dissertation exercise these three components. Each of these components produces a formal proof of refinement, and these refinement proofs compose to produce a proof that the binary is a refinement of the source semantics. This notion of refinement can then compose with correctness proofs for a C program, resulting in a verified binary. Throughout this work, our case study of interest will be the seL4 verified operating system kernel, compiled for the ARM instruction-set architecture, for which we will produce a verified efficient binary. The thesis of this work is that our translation validation approach offers us great flexibility. We can quickly produce verified binaries via many complex transformations without specifically addressing each such transformation. We can adapt our frontend to handle low-level source code which does not strictly respect the rules of the C language it is written in. We can also retarget our backend to address important timing concerns as well as correctness ones.
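A toy version of the idea (heavily simplified: the dissertation produces machine-checked refinement proofs over real binaries, whereas this sketch merely tests agreement on sampled inputs) runs the "source" semantics and the "compiled" program side by side:

#include <stdio.h>

/* Source expression: e = (x + 1) * x */
static int source_semantics(int x) { return (x + 1) * x; }

/* "Binary": a tiny stack machine standing in for compiler output */
typedef enum { PUSH_X, PUSH_1, ADD, MUL } op_t;

static int run_stack(const op_t *prog, int n, int x) {
    int stack[8], sp = 0;
    for (int i = 0; i < n; i++) {
        switch (prog[i]) {
        case PUSH_X: stack[sp++] = x; break;
        case PUSH_1: stack[sp++] = 1; break;
        case ADD:    sp--; stack[sp-1] += stack[sp]; break;
        case MUL:    sp--; stack[sp-1] *= stack[sp]; break;
        }
    }
    return stack[0];
}

int main(void) {
    const op_t prog[] = { PUSH_X, PUSH_1, ADD, PUSH_X, MUL };
    for (int x = -100; x <= 100; x++)
        if (run_stack(prog, 5, x) != source_semantics(x)) {
            printf("translation NOT validated at x = %d\n", x);
            return 1;
        }
    printf("translation validated on all sampled inputs\n");
    return 0;
}

Where this sketch checks finitely many inputs, the dissertation's framework proves refinement for all inputs, and composes such proofs across the frontend, core and backend.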
Advisors/Committee Members: Klein, Gerwin, Computer Science & Engineering, Faculty of Engineering, UNSW, Keller, Gabriele, Computer Science & Engineering, Faculty of Engineering, UNSW.
Subjects/Keywords: Compiler; Translation Validation; Operating System
APA (6th Edition):
Sewell, T. (2017). Translation validation for verified, efficient and timely operating systems. (Doctoral Dissertation). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/58861 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:47819/SOURCE02?view=true
Chicago Manual of Style (16th Edition):
Sewell, Thomas. “Translation validation for verified, efficient and timely operating systems.” 2017. Doctoral Dissertation, University of New South Wales. Accessed April 18, 2021.
http://handle.unsw.edu.au/1959.4/58861 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:47819/SOURCE02?view=true.
MLA Handbook (7th Edition):
Sewell, Thomas. “Translation validation for verified, efficient and timely operating systems.” 2017. Web. 18 Apr 2021.
Vancouver:
Sewell T. Translation validation for verified, efficient and timely operating systems. [Internet] [Doctoral dissertation]. University of New South Wales; 2017. [cited 2021 Apr 18].
Available from: http://handle.unsw.edu.au/1959.4/58861 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:47819/SOURCE02?view=true.
Council of Science Editors:
Sewell T. Translation validation for verified, efficient and timely operating systems. [Doctoral Dissertation]. University of New South Wales; 2017. Available from: http://handle.unsw.edu.au/1959.4/58861 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:47819/SOURCE02?view=true

University of New South Wales
21.
Ye, Ding.
Accelerating Dynamic Detection of Memory Errors for C Programs via Static Analysis.
Degree: Computer Science & Engineering, 2015, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/54507 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:35127/SOURCE02?view=true
► Memory errors in C programs are the root causes of many defects and vulnerabilities in software engineering. Among the available error detection techniques, dynamic analysis is widely…
(more)
▼ Memory errors in C programs are the root causes of many defects and vulnerabilities in software engineering. Among the available error detection techniques, dynamic analysis is widely used in industry due to its high precision. Unfortunately, existing approaches suffer from considerable runtime overheads, owing to unguided and overly conservative instrumentation. With the massive growth of software nowadays, such inefficiency prevents testing with comprehensive program inputs, leaving some input-specific memory errors undetected. This thesis presents novel techniques to address the efficiency problem by eliminating some unnecessary instrumentation guided by static analysis. Targeting two major types of memory errors, the research has developed two tools, Usher and WPBound, both implemented in the LLVM compiler infrastructure, to accelerate the dynamic detection. To facilitate efficient detection of undefined value uses, Usher infers the definedness of values using a value-flow graph that captures def-use information for both top-level and address-taken variables interprocedurally, and removes unnecessary instrumentation by solving a graph reachability problem. Usher works well with any pointer analysis (done a priori) and enables advanced instrumentation-reducing optimizations. For efficient detection of spatial errors (e.g., buffer overflows), WPBound enhances performance by reducing unnecessary bounds checks. The basic idea is to guard a bounds check at a memory access inside a loop, where the guard is computed outside the loop based on the notion of weakest precondition. The falsehood of the guard implies the absence of out-of-bounds errors at the dereference, thereby avoiding the corresponding bounds check inside the loop. For each tool, this thesis presents the methodology and evaluates the implementation with a set of C benchmarks. Their effectiveness is demonstrated with significant speedups over the state-of-the-art tools.
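The weakest-precondition idea behind the bounds-check reduction can be shown in a few lines of C (a hand-written illustration, not the tool's output; the real tool derives the guard automatically inside LLVM):

#include <stdio.h>
#include <stdlib.h>

static void checked_loop(int *a, int len, int n) {
    for (int i = 0; i < n; i++) {
        if (i < 0 || i >= len) { fprintf(stderr, "out of bounds\n"); abort(); }
        a[i] = i;                       /* per-iteration bounds check */
    }
}

static void hoisted_loop(int *a, int len, int n) {
    /* Precondition for "every access is in bounds": each 0 <= i < n also
       satisfies i < len, i.e. n <= len. Computed once, outside the loop. */
    if (n <= len) {
        for (int i = 0; i < n; i++) a[i] = i;   /* check-free fast path */
    } else {
        checked_loop(a, len, n);                /* conservative slow path */
    }
}

int main(void) {
    int a[16];
    hoisted_loop(a, 16, 16);
    printf("%d\n", a[15]);
    return 0;
}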
Advisors/Committee Members: Xue, Jingling, Computer Science & Engineering, Faculty of Engineering, UNSW.
Subjects/Keywords: C programs; LLVM Compiler architecture
APA (6th Edition):
Ye, D. (2015). Accelerating Dynamic Detection of Memory Errors for C Programs via Static Analysis. (Doctoral Dissertation). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/54507 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:35127/SOURCE02?view=true
Chicago Manual of Style (16th Edition):
Ye, Ding. “Accelerating Dynamic Detection of Memory Errors for C Programs via Static Analysis.” 2015. Doctoral Dissertation, University of New South Wales. Accessed April 18, 2021.
http://handle.unsw.edu.au/1959.4/54507 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:35127/SOURCE02?view=true.
MLA Handbook (7th Edition):
Ye, Ding. “Accelerating Dynamic Detection of Memory Errors for C Programs via Static Analysis.” 2015. Web. 18 Apr 2021.
Vancouver:
Ye D. Accelerating Dynamic Detection of Memory Errors for C Programs via Static Analysis. [Internet] [Doctoral dissertation]. University of New South Wales; 2015. [cited 2021 Apr 18].
Available from: http://handle.unsw.edu.au/1959.4/54507 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:35127/SOURCE02?view=true.
Council of Science Editors:
Ye D. Accelerating Dynamic Detection of Memory Errors for C Programs via Static Analysis. [Doctoral Dissertation]. University of New South Wales; 2015. Available from: http://handle.unsw.edu.au/1959.4/54507 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:35127/SOURCE02?view=true

NSYSU
22.
HSU, CHUN-TO.
Implementations and Automatic Synthesis of Programmable Logic Array (PLA) ROM.
Degree: Master, Computer Science and Engineering, 2014, NSYSU
URL: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0726114-101438
► Read-only memory (ROM) plays an important role in modern System-on-Chip (SoC) designs. Due to the regularity of the ROM structure, ROM components are usually generated through…
(more)
▼ Read-only memory (ROM) plays an important role in modern System-on-Chip (SoC) designs. Due to the regularity of the ROM structure, ROM components are usually generated through an automatic ROM compiler/generator. For example, the ARM TSMC cell library provides a ROM compiler to automatically synthesize ROMs of arbitrary size. In general, there are two different types of ROM implementations: the conventional ROM structure and the programmable logic array (PLA). For the implementation with the conventional ROM structure, we adopt a dynamic NAND-based circuit design with a multi-level decoder to reduce the area cost. Regarding the PLA-based implementation, the dynamic NOR-NOR structure is used, where the logic optimization of product terms is performed using the Espresso logic minimization tool. We develop automatic ROM generators for both ROM structures and compare them with those obtained from the ARM ROM compiler and those directly synthesized from combinational logic using the Synopsys RTL (Register Transfer Level) Design Compiler. Based on extensive comparisons of area, delay and power for ROMs of various sizes, we make several interesting observations and try to improve our ROM generators to make them more competitive for synthesizing large ROMs.
Advisors/Committee Members: Shiann-Rong Kuang (chair), Shen-Fu Hsiao. (committee member), Chuen-Yau Chen (chair), Jih-ching Chiu (chair), Ming-Chin Chen (chair).
Subjects/Keywords: RTL Compiler; ROM generator; Programmable Logic Array (PLA); ROM Compiler
APA (6th Edition):
HSU, C. (2014). Implementations and Automatic Synthesis of Programmable Logic Array (PLA) ROM. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0726114-101438
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
HSU, CHUN-TO. “Implementations and Automatic Synthesis of Programmable Logic Array (PLA) ROM.” 2014. Thesis, NSYSU. Accessed April 18, 2021.
http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0726114-101438.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
HSU, CHUN-TO. “Implementations and Automatic Synthesis of Programmable Logic Array (PLA) ROM.” 2014. Web. 18 Apr 2021.
Vancouver:
HSU C. Implementations and Automatic Synthesis of Programmable Logic Array (PLA) ROM. [Internet] [Thesis]. NSYSU; 2014. [cited 2021 Apr 18].
Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0726114-101438.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
HSU C. Implementations and Automatic Synthesis of Programmable Logic Array (PLA) ROM. [Thesis]. NSYSU; 2014. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0726114-101438
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Princeton University
23.
Paraskevopoulou, Zoe.
Verified Optimizations for Functional Languages.
Degree: PhD, 2020, Princeton University
URL: http://arks.princeton.edu/ark:/88435/dsp01pr76f648c
► Coq is one of the most widely adopted proof development systems. It allows programmers to write purely functional programs and verify them against specifications with machine-checked…
(more)
▼ Coq is one of the most widely adopted proof development systems. It allows programmers to write purely functional programs and verify them against specifications with machine-checked proofs. After verification, one can use Coq's extraction plugin to obtain a program (in OCaml, Haskell, or Scheme) that can be compiled and executed. However, bugs in either the extraction function or the compiler of the extraction language can render source-level verification useless. A verified compiler is a compiler whose output provably preserves the semantics of the source language. CertiCoq is a verified compiler, currently under development, for Coq's specification language, Gallina. CertiCoq targets Clight, a subset of the C language that can be compiled with the CompCert verified compiler to obtain a certified executable, bridging the gap between the formally verified source program and the compiled target program. In this thesis, I present the implementation and verification of CertiCoq's optimizing middle-end pipeline. CertiCoq's middle end consists of seven different transformations and is responsible for efficiently compiling an untyped purely functional intermediate language to a subset of the same language, which can be readily compiled to a first-order, low-level intermediate language. CertiCoq's middle-end pipeline performs crucial optimizations for functional languages including closure conversion, uncurrying, shrink-reduction and inlining. It advances the state of the art of verified optimizing compilers for functional languages by implementing more efficient closure-allocation strategies. For proving CertiCoq correct, I develop a framework based on the technique of logical relations, making novel technical contributions. I extend logical relations with notions of relational preconditions and postconditions that facilitate reasoning about the resource consumption of programs simultaneously with functional correctness. I demonstrate how this enables reasoning about preservation of non-terminating behaviors, which is not supported by traditional logical relations. Moreover, I develop a novel, lightweight technique that allows logical-relation proofs to be composed in order to obtain a top-level compositional compiler correctness theorem. This technique is used to obtain a separate compilation theorem that guarantees that programs compiled separately through CertiCoq using different sets of optimizations can be safely linked at the target level. Lastly, I use the framework to prove that CertiCoq's closure conversion is not only functionally correct but also safe for time and space, meaning that it is guaranteed to preserve the asymptotic time and space complexity of the source program.
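Of the optimizations named above, closure conversion is the most self-contained to illustrate. Here is a sketch in C of what the transformation produces (an analogy only; CertiCoq emits its own first-order intermediate language, not C): a function with free variables becomes a closed function that receives its environment explicitly.

#include <stdio.h>

/* Source, in pseudo-functional notation:
     let x = 41 in
     let add = fun y -> x + y in   (* x is free in add *)
     add 1                                                  */

typedef struct closure {
    int (*code)(struct closure *, int);   /* closed code pointer */
    int x;                                /* captured environment */
} closure;

static int add_code(closure *clo, int y) {
    return clo->x + y;            /* free variable read from the env */
}

static int apply(closure *clo, int y) { return clo->code(clo, y); }

int main(void) {
    closure add = { add_code, 41 };   /* closure allocation: code + env */
    printf("%d\n", apply(&add, 1));   /* prints 42 */
    return 0;
}

The "more efficient closure-allocation strategies" and the time-and-space safety theorem concern exactly this step: how the environment record is laid out and allocated, and a proof that doing so preserves the program's asymptotic costs.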
Advisors/Committee Members: Appel, Andrew W (advisor).
Subjects/Keywords: compiler correctness; compositional compiler correctness; Coq; functional programming languages; logical relations
APA (6th Edition):
Paraskevopoulou, Z. (2020). Verified Optimizations for Functional Languages. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp01pr76f648c
Chicago Manual of Style (16th Edition):
Paraskevopoulou, Zoe. “Verified Optimizations for Functional Languages.” 2020. Doctoral Dissertation, Princeton University. Accessed April 18, 2021.
http://arks.princeton.edu/ark:/88435/dsp01pr76f648c.
MLA Handbook (7th Edition):
Paraskevopoulou, Zoe. “Verified Optimizations for Functional Languages.” 2020. Web. 18 Apr 2021.
Vancouver:
Paraskevopoulou Z. Verified Optimizations for Functional Languages. [Internet] [Doctoral dissertation]. Princeton University; 2020. [cited 2021 Apr 18].
Available from: http://arks.princeton.edu/ark:/88435/dsp01pr76f648c.
Council of Science Editors:
Paraskevopoulou Z. Verified Optimizations for Functional Languages. [Doctoral Dissertation]. Princeton University; 2020. Available from: http://arks.princeton.edu/ark:/88435/dsp01pr76f648c

Brno University of Technology
24.
Horník, Jakub.
Zadní část překladače podmnožiny jazyka C pro 8-bitový procesor: Compiler Back-End of Subset of Language C for 8-Bit Processor.
Degree: 2019, Brno University of Technology
URL: http://hdl.handle.net/11012/54208
► A compiler allows us to describe an algorithm in a high-level programming language with a higher level of abstraction and readability than a low-level machine…
(more)
▼ A compiler allows us to describe an algorithm in a high-level programming language with a higher level of abstraction and readability than low-level machine code. This work describes the design of a compiler back-end for a subset of the C language targeting the 8-bit soft-core microcontroller Xilinx PicoBlaze-3. The design is described from the initial selection of a suitable framework to the implementation itself. One of the main motivations for this work is that no suitable compiler exists for this processor.
Advisors/Committee Members: Křivka, Zbyněk (advisor), Koutný, Jiří (referee).
Subjects/Keywords: compiler; back-end; intermediate code; Low Level Virtual Machine Compiler; PicoBlaze; PicoBlaze C Compiler; Small Device C Compiler
APA (6th Edition):
Horník, J. (2019). Zadní část překladače podmnožiny jazyka C pro 8-bitový procesor: Compiler Back-End of Subset of Language C for 8-Bit Processor. (Thesis). Brno University of Technology. Retrieved from http://hdl.handle.net/11012/54208
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Horník, Jakub. “Zadní část překladače podmnožiny jazyka C pro 8-bitový procesor: Compiler Back-End of Subset of Language C for 8-Bit Processor.” 2019. Thesis, Brno University of Technology. Accessed April 18, 2021.
http://hdl.handle.net/11012/54208.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Horník, Jakub. “Zadní část překladače podmnožiny jazyka C pro 8-bitový procesor: Compiler Back-End of Subset of Language C for 8-Bit Processor.” 2019. Web. 18 Apr 2021.
Vancouver:
Horník J. Zadní část překladače podmnožiny jazyka C pro 8-bitový procesor: Compiler Back-End of Subset of Language C for 8-Bit Processor. [Internet] [Thesis]. Brno University of Technology; 2019. [cited 2021 Apr 18].
Available from: http://hdl.handle.net/11012/54208.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Horník J. Zadní část překladače podmnožiny jazyka C pro 8-bitový procesor: Compiler Back-End of Subset of Language C for 8-Bit Processor. [Thesis]. Brno University of Technology; 2019. Available from: http://hdl.handle.net/11012/54208
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Brno University of Technology
25.
Machata, Petr.
Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection: Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection.
Degree: 2020, Brno University of Technology
URL: http://hdl.handle.net/11012/187782
► This MSc Thesis was performed in English with the support of the ANF DATA s.r.o., Brno. The entry barrier to the development for GCC got…
(more)
▼ This MSc Thesis was performed in English with the support of ANF DATA s.r.o., Brno. The entry barrier to GCC development has become considerably lower in recent years. Articles with architectural overviews and how-to documents appear in magazines, on websites, and at conferences. With the official intermediate language, GENERIC, used for communication between the front end and the rest of the compiler, things are even easier: it is no longer necessary to bear the tedium of RTL when writing a new front end. Yet, there is a complexity inherent in handling a source base the size of GCC. There are files to be written and peculiar options to be set up, all with relatively thin documentation. This work aims to help with this last point. An example front end is described, covering everything from source base setup, through various GENERIC constructs, to compiling the runtime library and using GCC's native preprocessor.
Advisors/Committee Members: Eysselt, Miloš (advisor), Masopust, Tomáš (referee).
Subjects/Keywords: GCC; GNU Compiler Collection; front end; Algol 60; compiler
APA (6th Edition):
Machata, P. (2020). Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection: Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection. (Thesis). Brno University of Technology. Retrieved from http://hdl.handle.net/11012/187782
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Machata, Petr. “Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection: Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection.” 2020. Thesis, Brno University of Technology. Accessed April 18, 2021.
http://hdl.handle.net/11012/187782.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Machata, Petr. “Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection: Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection.” 2020. Web. 18 Apr 2021.
Vancouver:
Machata P. Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection: Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection. [Internet] [Thesis]. Brno University of Technology; 2020. [cited 2021 Apr 18].
Available from: http://hdl.handle.net/11012/187782.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Machata P. Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection: Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection. [Thesis]. Brno University of Technology; 2020. Available from: http://hdl.handle.net/11012/187782
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Brno University of Technology
26.
Machata, Petr.
Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection: Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection.
Degree: 2020, Brno University of Technology
URL: http://hdl.handle.net/11012/54013
► This MSc Thesis was performed in English with the support of the ANF DATA s.r.o., Brno. The entry barrier to the development for GCC got…
(more)
▼ This MSc Thesis was performed in English with the support of ANF DATA s.r.o., Brno. The entry barrier to GCC development has become considerably lower in recent years. Articles with architectural overviews and how-to documents appear in magazines, on websites, and at conferences. With the official intermediate language, GENERIC, used for communication between the front end and the rest of the compiler, things are even easier: it is no longer necessary to bear the tedium of RTL when writing a new front end. Yet, there is a complexity inherent in handling a source base the size of GCC. There are files to be written and peculiar options to be set up, all with relatively thin documentation. This work aims to help with this last point. An example front end is described, covering everything from source base setup, through various GENERIC constructs, to compiling the runtime library and using GCC's native preprocessor.
Advisors/Committee Members: Eysselt, Miloš (advisor), Masopust, Tomáš (referee).
Subjects/Keywords: GCC; GNU Compiler Collection; front end; Algol 60; compiler
APA (6th Edition):
Machata, P. (2020). Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection: Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection. (Thesis). Brno University of Technology. Retrieved from http://hdl.handle.net/11012/54013
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Machata, Petr. “Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection: Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection.” 2020. Thesis, Brno University of Technology. Accessed April 18, 2021.
http://hdl.handle.net/11012/54013.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Machata, Petr. “Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection: Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection.” 2020. Web. 18 Apr 2021.
Vancouver:
Machata P. Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection: Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection. [Internet] [Thesis]. Brno University of Technology; 2020. [cited 2021 Apr 18].
Available from: http://hdl.handle.net/11012/54013.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Machata P. Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection: Methodology of Construction Compiler Front-End and Its Integration into the GNU Compiler Collection. [Thesis]. Brno University of Technology; 2020. Available from: http://hdl.handle.net/11012/54013
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Utah
27.
Pagariya, Rohit.
Direct equivalence testing of embedded software.
Degree: MS, School of Computing, 2011, University of Utah
URL: http://content.lib.utah.edu/cdm/singleitem/collection/etd3/id/48/rec/740
► Direct equivalence testing is a framework for detecting errors in C compilers and application programs that exploits the fact that program semantics should be preserved…
(more)
▼ Direct equivalence testing is a framework for detecting errors in C compilers and application programs that exploits the fact that program semantics should be preserved during the compilation process. Binaries generated from the same piece of code should remain equivalent irrespective of the compiler, or compiler optimizations, used. Compiler errors as well as program errors such as out-of-bounds memory access, stack overflow, and use of uninitialized local variables cause nonequivalence in the generated binaries. Direct equivalence testing has detected previously unknown errors in real-world embedded software like TinyOS and in different compilers like msp430-gcc and llvm-msp430.
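A host-side caricature of the workflow (illustrative assumptions throughout: the actual framework compares whole memory images of msp430 binaries, not return values of host shared objects) builds the same function with two compilers or optimization levels and compares behavior:

/* Build the function under test twice, e.g.:
     cc -O0 -shared -fPIC f.c -o f_O0.so
     cc -O2 -shared -fPIC f.c -o f_O2.so
   then compile this harness with -ldl and run it. */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

typedef int (*fn_t)(int);

static fn_t load(const char *so) {
    void *h = dlopen(so, RTLD_NOW);
    if (!h) { fprintf(stderr, "%s\n", dlerror()); exit(1); }
    fn_t f = (fn_t)dlsym(h, "f");   /* test function named "f" (assumption) */
    if (!f) { fprintf(stderr, "%s\n", dlerror()); exit(1); }
    return f;
}

int main(void) {
    fn_t f0 = load("./f_O0.so"), f2 = load("./f_O2.so");
    for (int x = -1000; x <= 1000; x++)
        if (f0(x) != f2(x)) {
            printf("nonequivalence at x = %d: %d vs %d\n", x, f0(x), f2(x));
            return 1;
        }
    printf("equivalent on all sampled inputs\n");
    return 0;
}

A divergence flags either a compiler bug or undefined behavior in the test program (uninitialized reads, out-of-bounds accesses), which is precisely the dual use the abstract describes.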
Subjects/Keywords: Compiler testing; Embedded software; Equivalence testing; Verification
APA (6th Edition):
Pagariya, R. (2011). Direct equivalence testing of embedded software. (Masters Thesis). University of Utah. Retrieved from http://content.lib.utah.edu/cdm/singleitem/collection/etd3/id/48/rec/740
Chicago Manual of Style (16th Edition):
Pagariya, Rohit. “Direct equivalence testing of embedded software.” 2011. Masters Thesis, University of Utah. Accessed April 18, 2021.
http://content.lib.utah.edu/cdm/singleitem/collection/etd3/id/48/rec/740.
MLA Handbook (7th Edition):
Pagariya, Rohit. “Direct equivalence testing of embedded software.” 2011. Web. 18 Apr 2021.
Vancouver:
Pagariya R. Direct equivalence testing of embedded software. [Internet] [Masters thesis]. University of Utah; 2011. [cited 2021 Apr 18].
Available from: http://content.lib.utah.edu/cdm/singleitem/collection/etd3/id/48/rec/740.
Council of Science Editors:
Pagariya R. Direct equivalence testing of embedded software. [Masters Thesis]. University of Utah; 2011. Available from: http://content.lib.utah.edu/cdm/singleitem/collection/etd3/id/48/rec/740

Cornell University
28.
Deng, Yawen.
Scalable Compiler for TERMES Distributed Assembly System.
Degree: M.S., Mechanical Engineering, Mechanical Engineering, 2018, Cornell University
URL: http://hdl.handle.net/1813/59408
► The TERMES system is a robot collective capable of autonomously constructing user-specified structures in three dimensions. The compiler is one of the key components that…
(more)
▼ The TERMES system is a robot collective capable of autonomously constructing user-specified structures in three dimensions. The compiler is one of the key components: it converts the goal structure into a directed map which an arbitrary number of robots can follow to perform decentralized construction. In previous work, the compiler was limited to brute-force search, which scales poorly with the size of the structure. The purpose of this research is to enhance the scalability of the compiler so that it can be applied to very large-scale structures. Correspondingly, a new scalable compiler is presented, with the ability to generate directed maps for structures with up to 1 million stacks of bricks. We further recast the old compiler as a constraint satisfaction problem and compare the two compilers' performance on a range of structures. Results show that the new compiler has significant advantages over the old one as the size and complexity of the structures increase. We also developed an automated scheme for improving the transition probability between neighboring stacks of bricks for efficient construction. This work represents an important step towards real-world deployment of robot collectives for construction.
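A toy fragment of what a directed map must respect (an assumption-laden sketch, not the thesis's compiler; the heights and the one-brick climb limit are illustrative of the TERMES setting): an edge between adjacent stacks is traversable only if their heights differ by at most one brick.

#include <stdio.h>
#include <stdlib.h>

#define ROWS 3
#define COLS 4

int main(void) {
    int h[ROWS][COLS] = { {1, 1, 2, 1},
                          {1, 2, 3, 2},
                          {1, 1, 2, 1} };   /* goal structure heights */
    /* emit west-to-east directed edges a robot could actually climb */
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c + 1 < COLS; c++)
            printf("(%d,%d)->(%d,%d)%s\n", r, c, r, c + 1,
                   abs(h[r][c + 1] - h[r][c]) <= 1 ? "" : "  [blocked]");
    return 0;
}

The compiler's real problem is the global one: choosing one consistent direction per edge so that every stack is reachable and buildable in some order, which is what makes a constraint-satisfaction formulation natural.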
Advisors/Committee Members: Petersen, Kirstin Hagelskjaer (chair), Knepper, Ross A. (committee member).
Subjects/Keywords: Robotics; Collective Construction; Mechanical engineering; Scalable Compiler
APA (6th Edition):
Deng, Y. (2018). Scalable Compiler for TERMES Distributed Assembly System. (Masters Thesis). Cornell University. Retrieved from http://hdl.handle.net/1813/59408
Chicago Manual of Style (16th Edition):
Deng, Yawen. “Scalable Compiler for TERMES Distributed Assembly System.” 2018. Masters Thesis, Cornell University. Accessed April 18, 2021.
http://hdl.handle.net/1813/59408.
MLA Handbook (7th Edition):
Deng, Yawen. “Scalable Compiler for TERMES Distributed Assembly System.” 2018. Web. 18 Apr 2021.
Vancouver:
Deng Y. Scalable Compiler for TERMES Distributed Assembly System. [Internet] [Masters thesis]. Cornell University; 2018. [cited 2021 Apr 18].
Available from: http://hdl.handle.net/1813/59408.
Council of Science Editors:
Deng Y. Scalable Compiler for TERMES Distributed Assembly System. [Masters Thesis]. Cornell University; 2018. Available from: http://hdl.handle.net/1813/59408
29.
Peckner, Justin E.
XML-based form creation.
Degree: MS, Computer Science, 2013, California State University – Northridge
URL: http://hdl.handle.net/10211.2/3188
► While web-based forms have become essential for any organization collecting and processing large amounts of data, CSUN currently has no central electronic form management system.…
(more)
▼ While web-based forms have become essential for any organization collecting and processing large amounts of data, CSUN currently has no central electronic form management system. This project is intended as the foundation of a future comprehensive electronic form system for CSUN. By inputting a block of XML markup consisting of easy-to-learn elements and attributes, users can instantly generate a usable HTML form containing labels, pictures, text boxes, buttons, and more. If the XML is malformed in one or more places, the system lists a specific location and message for each error. To parse the XML, a customized XML compiler was created. The compiler first uses a deterministic finite automaton (DFA) to perform lexical analysis, character-by-character. This process breaks the original markup into separate tokens such as <label>, Hello World!, and </label>. Next, a pushdown automaton (PDA) performs syntax analysis to determine if the sequence of tokens entered is valid. If so, the sequence is converted to Java object instances, and any attribute values not entered by the user are intelligently guessed by the system. Finally, the instances are converted to HTML, which is displayed in the user's browser. Given that this is the first of several components comprising the future form management system, care has been taken to make this project's code as open-ended and straightforward as possible. Additionally, while the XML compiler is used in this project to generate forms, it is completely independent of the form creation code. As such, programmers could easily interface with it for other XML-based purposes in the future.
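A toy version of the DFA stage (a sketch in C for brevity; the project itself converts tokens to Java object instances): character-by-character scanning that splits markup into tag and text tokens.

#include <stdio.h>

typedef enum { S_TEXT, S_TAG } state_t;   /* two DFA states */

static void lex(const char *s) {
    state_t st = S_TEXT;
    char buf[64];
    int n = 0;
    for (; *s; s++) {
        if (st == S_TEXT && *s == '<') {          /* text -> tag transition */
            if (n) { buf[n] = 0; printf("TEXT %s\n", buf); n = 0; }
            st = S_TAG;
            buf[n++] = *s;
        } else if (st == S_TAG && *s == '>') {    /* tag -> text transition */
            buf[n++] = *s; buf[n] = 0;
            printf("TAG  %s\n", buf);
            n = 0;
            st = S_TEXT;
        } else if (n < 62) {
            buf[n++] = *s;
        }
    }
    if (st == S_TEXT && n) { buf[n] = 0; printf("TEXT %s\n", buf); }
}

int main(void) {
    lex("<label>Hello World!</label>");
    return 0;
}

Running it prints the three tokens from the abstract: <label>, Hello World!, and </label>. The PDA stage is then needed to check that every opening tag is matched, which a finite automaton alone cannot do for nested markup.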
Advisors/Committee Members: Wiegley, Jeffrey (advisor), Covington, Richard G. (committee member).
Subjects/Keywords: Compiler; Dissertations, Academic – CSUN – Computer Science.
APA (6th Edition):
Peckner, J. E. (2013). XML-based form creation. (Masters Thesis). California State University – Northridge. Retrieved from http://hdl.handle.net/10211.2/3188
Chicago Manual of Style (16th Edition):
Peckner, Justin E. “XML-based form creation.” 2013. Masters Thesis, California State University – Northridge. Accessed April 18, 2021.
http://hdl.handle.net/10211.2/3188.
MLA Handbook (7th Edition):
Peckner, Justin E. “XML-based form creation.” 2013. Web. 18 Apr 2021.
Vancouver:
Peckner JE. XML-based form creation. [Internet] [Masters thesis]. California State University – Northridge; 2013. [cited 2021 Apr 18].
Available from: http://hdl.handle.net/10211.2/3188.
Council of Science Editors:
Peckner JE. XML-based form creation. [Masters Thesis]. California State University – Northridge; 2013. Available from: http://hdl.handle.net/10211.2/3188

Penn State University
30.
Ding, Wei.
A Fresh Look At Data Locality On Emerging Multicores And Manycores.
Degree: 2014, Penn State University
URL: https://submit-etda.libraries.psu.edu/catalog/22506
► The emergence of multicore platforms offers several opportunities for boosting application performance. These opportunities, which include parallelism and data locality benefits, require strong support…
(more)
▼ The emergence of multicore platforms offers several opportunities for boosting application performance. These opportunities, which include parallelism and data locality benefits, require strong support from compilers as well as operating systems. However, architectural abstractions relevant to the memory system are scarce in current programming and compiler systems. In fact, most compilers do not take any memory-system-specific parameter into account even when they are performing data locality optimizations. Instead, their locality optimizations are driven by rules of thumb such as “maximizing stride-1 accesses in innermost loop positions”. There are a few compilers that take cache and memory specific parameters into account to look at the data locality problem in a global sense. One of these parameters is the on-chip cache hierarchy, which determines the core connection and thus data sharing between computations on different cores. Another parameter is the memory controller. In a network-on-chip (NoC) based multicore architecture, an off-chip data access (main memory access) needs to travel through the on-chip network, spending a considerable amount of time within the chip (in addition to the memory access latency). In addition, it contends with on-chip (cache) accesses as both use the same NoC resources. The third parameter that will be discussed in this thesis is the row-buffer. Many emerging multicores employ banked memory systems, and each bank has an attached row-buffer that holds the most recently accessed memory row (page). A last-level cache miss that also misses in the row-buffer can experience much higher latency than a cache miss that hits in the row-buffer. Consequently, optimizing for row-buffer locality can be as important as optimizing for cache locality. Motivated by this, in this thesis, we propose four different compiler-directed “locality” optimization schemes that take these parameters into account. Specifically, our first scheme targets a cache hierarchy-aware loop transformation strategy for multicore architectures. It determines a loop iteration-to-core mapping by taking into account the application data access pattern and the multicore on-chip cache hierarchy. It employs “core vectors” to exploit data reuses at different layers of the cache hierarchy based on their reuse distances, with the goal of maximizing data locality at each level while minimizing the data dependences across the cores. In the case of a dependence-free loop nest, we customize our loop scheduling strategy, which determines a schedule for the iterations assigned to each core, with the goal of reducing data reuse distances across the cores. Our experimental evaluation shows that the proposed loop transformation scheme reduces miss rates at all levels of caches and application execution time significantly, and when supported by scheduling, the reductions in cache miss rates and execution time become much larger. The second scheme explores automatic data layout transformation targeting multithreaded applications…
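The quoted rule of thumb is worth seeing concretely (a textbook illustration, not the thesis's schemes, which additionally model the cache hierarchy, NoC and row-buffers): interchanging loops so the innermost index walks memory with stride 1.

#include <stdio.h>

#define N 512
static double a[N][N], b[N][N];

void copy_col_major(void) {   /* inner loop strides by N doubles: poor locality */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] = b[i][j];
}

void copy_row_major(void) {   /* inner loop strides by 1: good cache locality */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = b[i][j];
}

int main(void) {
    b[N-1][N-1] = 1.0;
    copy_col_major();
    copy_row_major();
    printf("%g\n", a[N-1][N-1]);
    return 0;
}

The thesis's point is that this single-core heuristic says nothing about which core should run which iterations; that is where the cache-hierarchy-aware mapping and “core vectors” come in.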
Advisors/Committee Members: Mahmut Taylan Kandemir, Dissertation Advisor/Co-Advisor, Mary Jane Irwin, Committee Member, Padma Raghavan, Committee Member, Dinghao Wu, Committee Member.
Subjects/Keywords: Data Locality; Multicore; Manycore; Compiler; Loop
APA (6th Edition):
Ding, W. (2014). A Fresh Look At Data Locality On Emerging Multicores And Manycores. (Thesis). Penn State University. Retrieved from https://submit-etda.libraries.psu.edu/catalog/22506
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Ding, Wei. “A Fresh Look At Data Locality On Emerging Multicores And Manycores.” 2014. Thesis, Penn State University. Accessed April 18, 2021.
https://submit-etda.libraries.psu.edu/catalog/22506.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Ding, Wei. “A Fresh Look At Data Locality On Emerging Multicores And Manycores.” 2014. Web. 18 Apr 2021.
Vancouver:
Ding W. A Fresh Look At Data Locality On Emerging Multicores And Manycores. [Internet] [Thesis]. Penn State University; 2014. [cited 2021 Apr 18].
Available from: https://submit-etda.libraries.psu.edu/catalog/22506.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Ding W. A Fresh Look At Data Locality On Emerging Multicores And Manycores. [Thesis]. Penn State University; 2014. Available from: https://submit-etda.libraries.psu.edu/catalog/22506
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
◁ [1] [2] [3] [4] [5] … [19] ▶