You searched for +publisher:"Georgia Tech" +contributor:("Pande, Santosh"). Showing records 1 – 30 of 54 total matches.

Georgia Tech

1. Gupta, Meghana. Code generation and adaptive control divergence management for light weight SIMT processors.

Degree: MS, Computer Science, 2016, Georgia Tech

 The energy costs of data movement are limiting the performance scaling of future generations of high performance computing architectures targeted to data intensive applications. The… (more)

Subjects/Keywords: Compiler; SIMT; Control divergence

APA (6th Edition):

Gupta, M. (2016). Code generation and adaptive control divergence management for light weight SIMT processors. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/55044

Chicago Manual of Style (16th Edition):

Gupta, Meghana. “Code generation and adaptive control divergence management for light weight SIMT processors.” 2016. Masters Thesis, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/55044.

MLA Handbook (7th Edition):

Gupta, Meghana. “Code generation and adaptive control divergence management for light weight SIMT processors.” 2016. Web. 13 Apr 2021.

Vancouver:

Gupta M. Code generation and adaptive control divergence management for light weight SIMT processors. [Internet] [Masters thesis]. Georgia Tech; 2016. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/55044.

Council of Science Editors:

Gupta M. Code generation and adaptive control divergence management for light weight SIMT processors. [Masters Thesis]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/55044


Georgia Tech

2. Na, Taesik. Energy efficient, secure and noise robust deep learning for the internet of things.

Degree: PhD, Electrical and Computer Engineering, 2018, Georgia Tech

 The objective of this research is to design an energy efficient, secure and noise robust deep learning system for the Internet of Things (IoTs). The… (more)

Subjects/Keywords: Deep learning; Adversarial machine learning; Energy efficient training; Noise robust machine learning; IoT

APA (6th Edition):

Na, T. (2018). Energy efficient, secure and noise robust deep learning for the internet of things. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/60293

Chicago Manual of Style (16th Edition):

Na, Taesik. “Energy efficient, secure and noise robust deep learning for the internet of things.” 2018. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/60293.

MLA Handbook (7th Edition):

Na, Taesik. “Energy efficient, secure and noise robust deep learning for the internet of things.” 2018. Web. 13 Apr 2021.

Vancouver:

Na T. Energy efficient, secure and noise robust deep learning for the internet of things. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/60293.

Council of Science Editors:

Na T. Energy efficient, secure and noise robust deep learning for the internet of things. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/60293


Georgia Tech

3. Kulkarni, Sulekha Raghavendra. Accelerating program analyses by cross-program training.

Degree: MS, Computer Science, 2016, Georgia Tech

 Practical programs share large modules of code. However, many program analyses are ineffective at reusing analysis results for shared code across programs. We present POLYMER,… (more)

Subjects/Keywords: Program analysis; Optimization; Datalog

APA (6th Edition):

Kulkarni, S. R. (2016). Accelerating program analyses by cross-program training. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/56359

Chicago Manual of Style (16th Edition):

Kulkarni, Sulekha Raghavendra. “Accelerating program analyses by cross-program training.” 2016. Masters Thesis, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/56359.

MLA Handbook (7th Edition):

Kulkarni, Sulekha Raghavendra. “Accelerating program analyses by cross-program training.” 2016. Web. 13 Apr 2021.

Vancouver:

Kulkarni SR. Accelerating program analyses by cross-program training. [Internet] [Masters thesis]. Georgia Tech; 2016. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/56359.

Council of Science Editors:

Kulkarni SR. Accelerating program analyses by cross-program training. [Masters Thesis]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/56359


Georgia Tech

4. Sahin, Semih. Memory optimizations for distributed executors in big data clouds.

Degree: PhD, Computer Science, 2019, Georgia Tech

 The amount of data generated from software and hardware sensors continues to grow exponentially as the world becomes more instrumented and interconnected. Our ability to… (more)

Subjects/Keywords: Memory management; Cloud computing; Big data processing; Spark

APA (6th Edition):

Sahin, S. (2019). Memory optimizations for distributed executors in big data clouds. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62669

Chicago Manual of Style (16th Edition):

Sahin, Semih. “Memory optimizations for distributed executors in big data clouds.” 2019. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/62669.

MLA Handbook (7th Edition):

Sahin, Semih. “Memory optimizations for distributed executors in big data clouds.” 2019. Web. 13 Apr 2021.

Vancouver:

Sahin S. Memory optimizations for distributed executors in big data clouds. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/62669.

Council of Science Editors:

Sahin S. Memory optimizations for distributed executors in big data clouds. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/62669


Georgia Tech

5. Park, Sunjae Young. Bridging the gap for hardware transactional memory.

Degree: PhD, Computer Science, 2018, Georgia Tech

 Transactional memory (TM) is a promising new tool for shared memory application development. Unlike mutual exclusion locks, TM allows atomic sections to execute concurrently, optimistically… (more)

Subjects/Keywords: Transactional memory; Multithread; Multicore

APA (6th Edition):

Park, S. Y. (2018). Bridging the gap for hardware transactional memory. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62218

Chicago Manual of Style (16th Edition):

Park, Sunjae Young. “Bridging the gap for hardware transactional memory.” 2018. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/62218.

MLA Handbook (7th Edition):

Park, Sunjae Young. “Bridging the gap for hardware transactional memory.” 2018. Web. 13 Apr 2021.

Vancouver:

Park SY. Bridging the gap for hardware transactional memory. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/62218.

Council of Science Editors:

Park SY. Bridging the gap for hardware transactional memory. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/62218


Georgia Tech

6. Kang, Suk Chan. Optimizing high locality memory references in cache coherent shared memory multi-core processors.

Degree: PhD, Electrical and Computer Engineering, 2019, Georgia Tech

 Optimizing memory references has been a primary research area of computer systems ever since the advent of the stored program computers. The objective of this… (more)

Subjects/Keywords: Shared memory system; Cache coherence; Memory consistency; Synchronization

APA (6th Edition):

Kang, S. C. (2019). Optimizing high locality memory references in cache coherent shared memory multi-core processors. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62641

Chicago Manual of Style (16th Edition):

Kang, Suk Chan. “Optimizing high locality memory references in cache coherent shared memory multi-core processors.” 2019. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/62641.

MLA Handbook (7th Edition):

Kang, Suk Chan. “Optimizing high locality memory references in cache coherent shared memory multi-core processors.” 2019. Web. 13 Apr 2021.

Vancouver:

Kang SC. Optimizing high locality memory references in cache coherent shared memory multi-core processors. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/62641.

Council of Science Editors:

Kang SC. Optimizing high locality memory references in cache coherent shared memory multi-core processors. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/62641


Georgia Tech

7. Ravichandran, Kaushik. Programming frameworks for performance driven speculative parallelization.

Degree: PhD, Computer Science, 2014, Georgia Tech

 Effectively utilizing available parallelism is becoming harder and harder as systems evolve to many-core processors with many tens of cores per chip. Automatically extracting parallelism… (more)

Subjects/Keywords: Speculation; Transactional memory

APA (6th Edition):

Ravichandran, K. (2014). Programming frameworks for performance driven speculative parallelization. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/52985

Chicago Manual of Style (16th Edition):

Ravichandran, Kaushik. “Programming frameworks for performance driven speculative parallelization.” 2014. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/52985.

MLA Handbook (7th Edition):

Ravichandran, Kaushik. “Programming frameworks for performance driven speculative parallelization.” 2014. Web. 13 Apr 2021.

Vancouver:

Ravichandran K. Programming frameworks for performance driven speculative parallelization. [Internet] [Doctoral dissertation]. Georgia Tech; 2014. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/52985.

Council of Science Editors:

Ravichandran K. Programming frameworks for performance driven speculative parallelization. [Doctoral Dissertation]. Georgia Tech; 2014. Available from: http://hdl.handle.net/1853/52985


Georgia Tech

8. Kumar, Tushar. Characterizing and controlling program behavior using execution-time variance.

Degree: PhD, Electrical and Computer Engineering, 2016, Georgia Tech

 Immersive applications, such as computer gaming, computer vision and video codecs, are an important emerging class of applications with QoS requirements that are difficult to… (more)

Subjects/Keywords: Profiling; QoS tuning; Adaptive control; Optimal control; Gain scheduling; LQR; Machine learning; System identification; Parameter estimation; Online training; Multimedia; Video; Gaming; Computer vision; Statistical analysis; Best effort; Probabilistic; Program analysis; Linear fit; Dynamic tuning

APA (6th Edition):

Kumar, T. (2016). Characterizing and controlling program behavior using execution-time variance. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/55000

Chicago Manual of Style (16th Edition):

Kumar, Tushar. “Characterizing and controlling program behavior using execution-time variance.” 2016. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/55000.

MLA Handbook (7th Edition):

Kumar, Tushar. “Characterizing and controlling program behavior using execution-time variance.” 2016. Web. 13 Apr 2021.

Vancouver:

Kumar T. Characterizing and controlling program behavior using execution-time variance. [Internet] [Doctoral dissertation]. Georgia Tech; 2016. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/55000.

Council of Science Editors:

Kumar T. Characterizing and controlling program behavior using execution-time variance. [Doctoral Dissertation]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/55000


Georgia Tech

9. Lee, Sangho. Mitigating the performance impact of memory bloat.

Degree: PhD, Computer Science, 2015, Georgia Tech

 Memory bloat is loosely defined as an excessive memory usage by an application during its execution. Due to the complexity of efficient memory management that… (more)

Subjects/Keywords: Memory bloat; optimization

APA (6th Edition):

Lee, S. (2015). Mitigating the performance impact of memory bloat. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/56174

Chicago Manual of Style (16th Edition):

Lee, Sangho. “Mitigating the performance impact of memory bloat.” 2015. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/56174.

MLA Handbook (7th Edition):

Lee, Sangho. “Mitigating the performance impact of memory bloat.” 2015. Web. 13 Apr 2021.

Vancouver:

Lee S. Mitigating the performance impact of memory bloat. [Internet] [Doctoral dissertation]. Georgia Tech; 2015. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/56174.

Council of Science Editors:

Lee S. Mitigating the performance impact of memory bloat. [Doctoral Dissertation]. Georgia Tech; 2015. Available from: http://hdl.handle.net/1853/56174


Georgia Tech

10. Hassan, Syed Minhaj. Exploiting on-chip memory concurrency in 3d manycore architectures.

Degree: PhD, Electrical and Computer Engineering, 2016, Georgia Tech

 The objective of this thesis is to optimize the uncore of 3D many-core architectures. More specifically, we note that technology trends point to large increases… (more)

Subjects/Keywords: 3D memory systems; Network-on-chip; 3D system thermal analysis; Memory-level parallelism; DRAM

APA (6th Edition):

Hassan, S. M. (2016). Exploiting on-chip memory concurrency in 3d manycore architectures. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/56251

Chicago Manual of Style (16th Edition):

Hassan, Syed Minhaj. “Exploiting on-chip memory concurrency in 3d manycore architectures.” 2016. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/56251.

MLA Handbook (7th Edition):

Hassan, Syed Minhaj. “Exploiting on-chip memory concurrency in 3d manycore architectures.” 2016. Web. 13 Apr 2021.

Vancouver:

Hassan SM. Exploiting on-chip memory concurrency in 3d manycore architectures. [Internet] [Doctoral dissertation]. Georgia Tech; 2016. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/56251.

Council of Science Editors:

Hassan SM. Exploiting on-chip memory concurrency in 3d manycore architectures. [Doctoral Dissertation]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/56251


Georgia Tech

11. Wang, Jin. Acceleration and optimization of dynamic parallelism for irregular applications on GPUs.

Degree: PhD, Electrical and Computer Engineering, 2016, Georgia Tech

 The objective of this thesis is the development, implementation and optimization of a GPU execution model extension that efficiently supports time-varying, nested, fine-grained dynamic parallelism… (more)

Subjects/Keywords: General-purpose GPU; Dynamic parallelism; Irregular applications

APA (6th Edition):

Wang, J. (2016). Acceleration and optimization of dynamic parallelism for irregular applications on GPUs. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/56294

Chicago Manual of Style (16th Edition):

Wang, Jin. “Acceleration and optimization of dynamic parallelism for irregular applications on GPUs.” 2016. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/56294.

MLA Handbook (7th Edition):

Wang, Jin. “Acceleration and optimization of dynamic parallelism for irregular applications on GPUs.” 2016. Web. 13 Apr 2021.

Vancouver:

Wang J. Acceleration and optimization of dynamic parallelism for irregular applications on GPUs. [Internet] [Doctoral dissertation]. Georgia Tech; 2016. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/56294.

Council of Science Editors:

Wang J. Acceleration and optimization of dynamic parallelism for irregular applications on GPUs. [Doctoral Dissertation]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/56294


Georgia Tech

12. Mandal, Ankush. Enabling parallelism and optimizations in data mining algorithms for power-law data.

Degree: PhD, Computer Science, 2020, Georgia Tech

 Today's data mining tasks aim to extract meaningful information from a large amount of data in a reasonable time mainly via means of  – a)… (more)

Subjects/Keywords: Data mining; Performance optimization; Parallel approximate algorithms; Power-law data; Sketches; Word embedding; Multi-core; GPU

APA (6th Edition):

Mandal, A. (2020). Enabling parallelism and optimizations in data mining algorithms for power-law data. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/63692

Chicago Manual of Style (16th Edition):

Mandal, Ankush. “Enabling parallelism and optimizations in data mining algorithms for power-law data.” 2020. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/63692.

MLA Handbook (7th Edition):

Mandal, Ankush. “Enabling parallelism and optimizations in data mining algorithms for power-law data.” 2020. Web. 13 Apr 2021.

Vancouver:

Mandal A. Enabling parallelism and optimizations in data mining algorithms for power-law data. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/63692.

Council of Science Editors:

Mandal A. Enabling parallelism and optimizations in data mining algorithms for power-law data. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/63692


Georgia Tech

13. Mururu, Girish. Compiler Guided Scheduling : A Cross-Stack Approach For Performance Elicitation.

Degree: PhD, Computer Science, 2020, Georgia Tech

 Modern software executes on multi-core systems that share resources like several levels of memory hierarchy (caches, main memory, secondary storage), I/O devices, and network interfaces.… (more)

Subjects/Keywords: Compilers; Scheduling; Co-location

APA (6th Edition):

Mururu, G. (2020). Compiler Guided Scheduling : A Cross-Stack Approach For Performance Elicitation. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/64096

Chicago Manual of Style (16th Edition):

Mururu, Girish. “Compiler Guided Scheduling : A Cross-Stack Approach For Performance Elicitation.” 2020. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/64096.

MLA Handbook (7th Edition):

Mururu, Girish. “Compiler Guided Scheduling : A Cross-Stack Approach For Performance Elicitation.” 2020. Web. 13 Apr 2021.

Vancouver:

Mururu G. Compiler Guided Scheduling : A Cross-Stack Approach For Performance Elicitation. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/64096.

Council of Science Editors:

Mururu G. Compiler Guided Scheduling : A Cross-Stack Approach For Performance Elicitation. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/64096


Georgia Tech

14. Chatarasi, Prasanth. ADVANCING COMPILER OPTIMIZATIONS FOR GENERAL-PURPOSE & DOMAIN-SPECIFIC PARALLEL ARCHITECTURES.

Degree: PhD, Computer Science, 2020, Georgia Tech

 Computer hardware is undergoing a major disruption as we approach the end of Moore’s law, in the form of new advancements to general-purpose and domain-specific… (more)

Subjects/Keywords: Compiler Optimizations; General-Purpose Architectures; Domain-Specific Architectures; Deep Learning; Graph Analytics; Accelerators; Polyhedral Model

APA (6th Edition):

Chatarasi, P. (2020). ADVANCING COMPILER OPTIMIZATIONS FOR GENERAL-PURPOSE & DOMAIN-SPECIFIC PARALLEL ARCHITECTURES. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/64099

Chicago Manual of Style (16th Edition):

Chatarasi, Prasanth. “ADVANCING COMPILER OPTIMIZATIONS FOR GENERAL-PURPOSE & DOMAIN-SPECIFIC PARALLEL ARCHITECTURES.” 2020. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/64099.

MLA Handbook (7th Edition):

Chatarasi, Prasanth. “ADVANCING COMPILER OPTIMIZATIONS FOR GENERAL-PURPOSE & DOMAIN-SPECIFIC PARALLEL ARCHITECTURES.” 2020. Web. 13 Apr 2021.

Vancouver:

Chatarasi P. ADVANCING COMPILER OPTIMIZATIONS FOR GENERAL-PURPOSE & DOMAIN-SPECIFIC PARALLEL ARCHITECTURES. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/64099.

Council of Science Editors:

Chatarasi P. ADVANCING COMPILER OPTIMIZATIONS FOR GENERAL-PURPOSE & DOMAIN-SPECIFIC PARALLEL ARCHITECTURES. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/64099


Georgia Tech

15. Chen, Chao. Compiler-Assisted Resilience Framework for Recovery from Transient Faults.

Degree: PhD, Computer Science, 2020, Georgia Tech

 Due to system scaling trends toward smaller transistor size, higher circuit density and the use of near-threshold voltage (NTV) techniques, transient hardware faults introduced by… (more)

Subjects/Keywords: Resilience; Compiler; HPC; High Performance Computing; Fault Tolerance; SDC; Silent Data Corruption; Transient Hardware Fault; Transient Fault; Soft Failure; Crash; Failure

APA (6th Edition):

Chen, C. (2020). Compiler-Assisted Resilience Framework for Recovery from Transient Faults. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/64214

Chicago Manual of Style (16th Edition):

Chen, Chao. “Compiler-Assisted Resilience Framework for Recovery from Transient Faults.” 2020. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/64214.

MLA Handbook (7th Edition):

Chen, Chao. “Compiler-Assisted Resilience Framework for Recovery from Transient Faults.” 2020. Web. 13 Apr 2021.

Vancouver:

Chen C. Compiler-Assisted Resilience Framework for Recovery from Transient Faults. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/64214.

Council of Science Editors:

Chen C. Compiler-Assisted Resilience Framework for Recovery from Transient Faults. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/64214

16. Marquez, Nicholas Alexander. OOCFA2: a PDA-based higher-order flow analysis for object-oriented programs.

Degree: MS, Computer Science, 2013, Georgia Tech

 The application of higher-order PDA-based flow analyses to object-oriented languages enables comprehensive and precise characterization of program behavior, while retaining practicality with efficiency. We implement… (more)

Subjects/Keywords: CFA2; kCFA; Dalvik; Java; JVM; Security analysis; Static analysis; Object-oriented programming (Computer science); Object-oriented programming languages; Operating systems (Computers); Data protection

APA (6th Edition):

Marquez, N. A. (2013). OOCFA2: a PDA-based higher-order flow analysis for object-oriented programs. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/47556

Chicago Manual of Style (16th Edition):

Marquez, Nicholas Alexander. “OOCFA2: a PDA-based higher-order flow analysis for object-oriented programs.” 2013. Masters Thesis, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/47556.

MLA Handbook (7th Edition):

Marquez, Nicholas Alexander. “OOCFA2: a PDA-based higher-order flow analysis for object-oriented programs.” 2013. Web. 13 Apr 2021.

Vancouver:

Marquez NA. OOCFA2: a PDA-based higher-order flow analysis for object-oriented programs. [Internet] [Masters thesis]. Georgia Tech; 2013. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/47556.

Council of Science Editors:

Marquez NA. OOCFA2: a PDA-based higher-order flow analysis for object-oriented programs. [Masters Thesis]. Georgia Tech; 2013. Available from: http://hdl.handle.net/1853/47556

17. Bhagwat, Ashwini. Methodologies and tools for computation offloading on heterogeneous multicores.

Degree: MS, Computing, 2009, Georgia Tech

 Frequency scaling in traditional computing systems has hit the power wall and multicore computing is here to stay. Unlike homogeneous multicores which have uniform architecture… (more)

Subjects/Keywords: Parallel Computing; Coprocessors; Coding theory

APA (6th Edition):

Bhagwat, A. (2009). Methodologies and tools for computation offloading on heterogeneous multicores. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/29688

Chicago Manual of Style (16th Edition):

Bhagwat, Ashwini. “Methodologies and tools for computation offloading on heterogeneous multicores.” 2009. Masters Thesis, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/29688.

MLA Handbook (7th Edition):

Bhagwat, Ashwini. “Methodologies and tools for computation offloading on heterogeneous multicores.” 2009. Web. 13 Apr 2021.

Vancouver:

Bhagwat A. Methodologies and tools for computation offloading on heterogeneous multicores. [Internet] [Masters thesis]. Georgia Tech; 2009. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/29688.

Council of Science Editors:

Bhagwat A. Methodologies and tools for computation offloading on heterogeneous multicores. [Masters Thesis]. Georgia Tech; 2009. Available from: http://hdl.handle.net/1853/29688

18. Ozarde, Sarang Anil. Performance understanding and tuning of iterative computation using profiling techniques.

Degree: MS, Computing, 2010, Georgia Tech

 Most applications spend a significant amount of time in the iterative parts of a computation. They typically iterate over the same set of operations with… (more)

Subjects/Keywords: Performance debugging; Performance analysis; Combinatorial optimization; Algorithms; Heuristic algorithms

APA (6th Edition):

Ozarde, S. A. (2010). Performance understanding and tuning of iterative computation using profiling techniques. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/34757

Chicago Manual of Style (16th Edition):

Ozarde, Sarang Anil. “Performance understanding and tuning of iterative computation using profiling techniques.” 2010. Masters Thesis, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/34757.

MLA Handbook (7th Edition):

Ozarde, Sarang Anil. “Performance understanding and tuning of iterative computation using profiling techniques.” 2010. Web. 13 Apr 2021.

Vancouver:

Ozarde SA. Performance understanding and tuning of iterative computation using profiling techniques. [Internet] [Masters thesis]. Georgia Tech; 2010. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/34757.

Council of Science Editors:

Ozarde SA. Performance understanding and tuning of iterative computation using profiling techniques. [Masters Thesis]. Georgia Tech; 2010. Available from: http://hdl.handle.net/1853/34757

19. Kerr, Andrew. A model of dynamic compilation for heterogeneous compute platforms.

Degree: PhD, Electrical and Computer Engineering, 2012, Georgia Tech

 Trends in computer engineering place renewed emphasis on increasing parallelism and heterogeneity. The rise of parallelism adds an additional dimension to the challenge of portability,… (more)

Subjects/Keywords: Dynamic compilation; GPU computing; Cuda; Opencl; SIMD; Vector; Multicore; Parallel computing; Parallel computers; Parallel programs (Computer programs); Heterogeneous computing; Parallel processing (Electronic computers); High performance computing

APA (6th Edition):

Kerr, A. (2012). A model of dynamic compilation for heterogeneous compute platforms. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/47719

Chicago Manual of Style (16th Edition):

Kerr, Andrew. “A model of dynamic compilation for heterogeneous compute platforms.” 2012. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/47719.

MLA Handbook (7th Edition):

Kerr, Andrew. “A model of dynamic compilation for heterogeneous compute platforms.” 2012. Web. 13 Apr 2021.

Vancouver:

Kerr A. A model of dynamic compilation for heterogeneous compute platforms. [Internet] [Doctoral dissertation]. Georgia Tech; 2012. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/47719.

Council of Science Editors:

Kerr A. A model of dynamic compilation for heterogeneous compute platforms. [Doctoral Dissertation]. Georgia Tech; 2012. Available from: http://hdl.handle.net/1853/47719

20. Hong, Kirak. A distributed framework for situation awareness on camera networks.

Degree: PhD, Computer Science, 2014, Georgia Tech

 With the proliferation of cameras and advanced video analytics, situation awareness applications that automatically generate actionable knowledge from live camera streams have become an important… (more)

Subjects/Keywords: Distributed framework; Situation awareness; Camera networks; Stream processing

APA (6th Edition):

Hong, K. (2014). A distributed framework for situation awareness on camera networks. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/52263

Chicago Manual of Style (16th Edition):

Hong, Kirak. “A distributed framework for situation awareness on camera networks.” 2014. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/52263.

MLA Handbook (7th Edition):

Hong, Kirak. “A distributed framework for situation awareness on camera networks.” 2014. Web. 13 Apr 2021.

Vancouver:

Hong K. A distributed framework for situation awareness on camera networks. [Internet] [Doctoral dissertation]. Georgia Tech; 2014. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/52263.

Council of Science Editors:

Hong K. A distributed framework for situation awareness on camera networks. [Doctoral Dissertation]. Georgia Tech; 2014. Available from: http://hdl.handle.net/1853/52263

21. Cledat, Romain. Programming models for speculative and optimistic parallelism based on algorithmic properties.

Degree: PhD, Computing, 2011, Georgia Tech

 Today's hardware is becoming more and more parallel. While embarrassingly parallel codes, such as high-performance computing ones, can readily take advantage of this increased number… (more)

Subjects/Keywords: Parallel programming; Algorithmic properties; Algorithmic diversity; Variable semantics; Data-structure semantics; N-way model; Algorithms; Parallel computers; Computer science

APA (6th Edition):

Cledat, R. (2011). Programming models for speculative and optimistic parallelism based on algorithmic properties. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/42749

Chicago Manual of Style (16th Edition):

Cledat, Romain. “Programming models for speculative and optimistic parallelism based on algorithmic properties.” 2011. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/42749.

MLA Handbook (7th Edition):

Cledat, Romain. “Programming models for speculative and optimistic parallelism based on algorithmic properties.” 2011. Web. 13 Apr 2021.

Vancouver:

Cledat R. Programming models for speculative and optimistic parallelism based on algorithmic properties. [Internet] [Doctoral dissertation]. Georgia Tech; 2011. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/42749.

Council of Science Editors:

Cledat R. Programming models for speculative and optimistic parallelism based on algorithmic properties. [Doctoral Dissertation]. Georgia Tech; 2011. Available from: http://hdl.handle.net/1853/42749

22. Lillethun, David. ssIoTa: A system software framework for the internet of things.

Degree: PhD, Computer Science, 2015, Georgia Tech

 Sensors are widely deployed in our environment, and their number is increasing rapidly. In the near future, billions of devices will all be connected to… (more)

Subjects/Keywords: Complex event processing; Situation awareness; Cyberphysical systems; Live streaming analysis; Internet of things; Distributed systems; Distributed scheduling; Fog computing

APA (6th Edition):

Lillethun, D. (2015). ssIoTa: A system software framework for the internet of things. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/53531

Chicago Manual of Style (16th Edition):

Lillethun, David. “ssIoTa: A system software framework for the internet of things.” 2015. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/53531.

MLA Handbook (7th Edition):

Lillethun, David. “ssIoTa: A system software framework for the internet of things.” 2015. Web. 13 Apr 2021.

Vancouver:

Lillethun D. ssIoTa: A system software framework for the internet of things. [Internet] [Doctoral dissertation]. Georgia Tech; 2015. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/53531.

Council of Science Editors:

Lillethun D. ssIoTa: A system software framework for the internet of things. [Doctoral Dissertation]. Georgia Tech; 2015. Available from: http://hdl.handle.net/1853/53531

23. Wu, Haicheng. Acceleration and execution of relational queries using general purpose graphics processing unit (GPGPU).

Degree: PhD, Electrical and Computer Engineering, 2015, Georgia Tech

 This thesis first maps the relational computation onto Graphics Processing Units (GPUs) by designing a series of tools and then explores the different opportunities of… (more)

Subjects/Keywords: Database; GPU

APA (6th Edition):

Wu, H. (2015). Acceleration and execution of relational queries using general purpose graphics processing unit (GPGPU). (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/54405

Chicago Manual of Style (16th Edition):

Wu, Haicheng. “Acceleration and execution of relational queries using general purpose graphics processing unit (GPGPU).” 2015. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/54405.

MLA Handbook (7th Edition):

Wu, Haicheng. “Acceleration and execution of relational queries using general purpose graphics processing unit (GPGPU).” 2015. Web. 13 Apr 2021.

Vancouver:

Wu H. Acceleration and execution of relational queries using general purpose graphics processing unit (GPGPU). [Internet] [Doctoral dissertation]. Georgia Tech; 2015. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/54405.

Council of Science Editors:

Wu H. Acceleration and execution of relational queries using general purpose graphics processing unit (GPGPU). [Doctoral Dissertation]. Georgia Tech; 2015. Available from: http://hdl.handle.net/1853/54405

24. Railing, Brian Paul. Collecting and representing parallel programs with high performance instrumentation.

Degree: PhD, Computer Science, 2015, Georgia Tech

 Computer architecture has looming challenges with finding program parallelism, process technology limits, and limited power budget. To navigate these challenges, a deeper understanding of parallel… (more)

Subjects/Keywords: Computer architecture; Compilers; Compiler-based instrumentation; Parallel programming; Parallel program analysis; Instrumentation performance; Task graph; Program representation; Heterogeneous computing

APA (6th Edition):

Railing, B. P. (2015). Collecting and representing parallel programs with high performance instrumentation. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/54431

Chicago Manual of Style (16th Edition):

Railing, Brian Paul. “Collecting and representing parallel programs with high performance instrumentation.” 2015. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/54431.

MLA Handbook (7th Edition):

Railing, Brian Paul. “Collecting and representing parallel programs with high performance instrumentation.” 2015. Web. 13 Apr 2021.

Vancouver:

Railing BP. Collecting and representing parallel programs with high performance instrumentation. [Internet] [Doctoral dissertation]. Georgia Tech; 2015. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/54431.

Council of Science Editors:

Railing BP. Collecting and representing parallel programs with high performance instrumentation. [Doctoral Dissertation]. Georgia Tech; 2015. Available from: http://hdl.handle.net/1853/54431

25. Mohapatra, Dushmanta. Coordinated memory management in virtualized environments.

Degree: PhD, Computer Science, 2015, Georgia Tech

 Two recent advances are the primary motivating factors for the research in my dissertation. First, virtualization is no longer confined to the powerful server class… (more)

Subjects/Keywords: Memory management; Virtualization

APA (6th Edition):

Mohapatra, D. (2015). Coordinated memory management in virtualized environments. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/54454

Chicago Manual of Style (16th Edition):

Mohapatra, Dushmanta. “Coordinated memory management in virtualized environments.” 2015. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/54454.

MLA Handbook (7th Edition):

Mohapatra, Dushmanta. “Coordinated memory management in virtualized environments.” 2015. Web. 13 Apr 2021.

Vancouver:

Mohapatra D. Coordinated memory management in virtualized environments. [Internet] [Doctoral dissertation]. Georgia Tech; 2015. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/54454.

Council of Science Editors:

Mohapatra D. Coordinated memory management in virtualized environments. [Doctoral Dissertation]. Georgia Tech; 2015. Available from: http://hdl.handle.net/1853/54454

26. Jung, Changhee. Effective techniques for understanding and improving data structure usage.

Degree: PhD, Computer Science, 2013, Georgia Tech

 Turing Award winner Niklaus Wirth famously noted, 'Algorithms + Data Structures = Programs', and it follows that data structures should be carefully considered for effective… (more)

Subjects/Keywords: Data structure identification; Memory graphs; Interface functions; Data structure selection; Application generator; Training framework; Performance counters; Data structures (Computer science)

APA (6th Edition):

Jung, C. (2013). Effective techniques for understanding and improving data structure usage. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/49101

Chicago Manual of Style (16th Edition):

Jung, Changhee. “Effective techniques for understanding and improving data structure usage.” 2013. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/49101.

MLA Handbook (7th Edition):

Jung, Changhee. “Effective techniques for understanding and improving data structure usage.” 2013. Web. 13 Apr 2021.

Vancouver:

Jung C. Effective techniques for understanding and improving data structure usage. [Internet] [Doctoral dissertation]. Georgia Tech; 2013. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/49101.

Council of Science Editors:

Jung C. Effective techniques for understanding and improving data structure usage. [Doctoral Dissertation]. Georgia Tech; 2013. Available from: http://hdl.handle.net/1853/49101

27. Hou, Cong. Automated synthesis for program inversion.

Degree: PhD, Computer Science, 2013, Georgia Tech

 We consider the problem of synthesizing program inverses for imperative languages. Our primary motivation comes from optimistic parallel discrete event simulation (OPDES). There, a simulator… (more)

Subjects/Keywords: Program inversion; Program synthesis; SSA; Compiler; Computer simulation; Reversible computing

APA (6th Edition):

Hou, C. (2013). Automated synthesis for program inversion. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/49037

Chicago Manual of Style (16th Edition):

Hou, Cong. “Automated synthesis for program inversion.” 2013. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/49037.

MLA Handbook (7th Edition):

Hou, Cong. “Automated synthesis for program inversion.” 2013. Web. 13 Apr 2021.

Vancouver:

Hou C. Automated synthesis for program inversion. [Internet] [Doctoral dissertation]. Georgia Tech; 2013. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/49037.

Council of Science Editors:

Hou C. Automated synthesis for program inversion. [Doctoral Dissertation]. Georgia Tech; 2013. Available from: http://hdl.handle.net/1853/49037

28. Dayal, Jai. Middleware for large scale in situ analytics workflows.

Degree: PhD, Computer Science, 2016, Georgia Tech

 The trend to exascale is causing researchers to rethink the entire computational science stack, as future generation machines will contain both diverse hardware environments… (more)

Subjects/Keywords: In situ; High performance computing; Big data; Code coupling; Workflows

APA (6th Edition):

Dayal, J. (2016). Middleware for large scale in situ analytics workflows. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/56355

Chicago Manual of Style (16th Edition):

Dayal, Jai. “Middleware for large scale in situ analytics workflows.” 2016. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/56355.

MLA Handbook (7th Edition):

Dayal, Jai. “Middleware for large scale in situ analytics workflows.” 2016. Web. 13 Apr 2021.

Vancouver:

Dayal J. Middleware for large scale in situ analytics workflows. [Internet] [Doctoral dissertation]. Georgia Tech; 2016. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/56355.

Council of Science Editors:

Dayal J. Middleware for large scale in situ analytics workflows. [Doctoral Dissertation]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/56355

29. Zhang, Xin. Combining logical and probabilistic reasoning in program analysis.

Degree: PhD, Computer Science, 2017, Georgia Tech

 Software is becoming increasingly pervasive and complex. These trends expose masses of users to unintended software failures and deliberate cyber-attacks. A widely adopted solution to… (more)

Subjects/Keywords: Program analysis; Logic; Probability; Combined logical and probabilistic reasoning; Markov logic networks; Datalog; MaxSAT; Verification; Bug finding; Programming languages; Software engineering

APA (6th Edition):

Zhang, X. (2017). Combining logical and probabilistic reasoning in program analysis. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/59200

Chicago Manual of Style (16th Edition):

Zhang, Xin. “Combining logical and probabilistic reasoning in program analysis.” 2017. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/59200.

MLA Handbook (7th Edition):

Zhang, Xin. “Combining logical and probabilistic reasoning in program analysis.” 2017. Web. 13 Apr 2021.

Vancouver:

Zhang X. Combining logical and probabilistic reasoning in program analysis. [Internet] [Doctoral dissertation]. Georgia Tech; 2017. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/59200.

Council of Science Editors:

Zhang X. Combining logical and probabilistic reasoning in program analysis. [Doctoral Dissertation]. Georgia Tech; 2017. Available from: http://hdl.handle.net/1853/59200

30. Kim, Minjang. Dynamic program analysis algorithms to assist parallelization.

Degree: PhD, Computing, 2012, Georgia Tech

 All market-leading processor vendors have started to pursue multicore processors as an alternative to high-frequency single-core processors for better energy and power efficiency. This transition… (more)

Subjects/Keywords: Parallel programming; Parallelization; Multi-core; Profiling; Program analysis; Compilers; Parallel programs (Computer programs); Computer programs; Software engineering; Computer science; Algorithms

APA (6th Edition):

Kim, M. (2012). Dynamic program analysis algorithms to assist parallelization. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/45758

Chicago Manual of Style (16th Edition):

Kim, Minjang. “Dynamic program analysis algorithms to assist parallelization.” 2012. Doctoral Dissertation, Georgia Tech. Accessed April 13, 2021. http://hdl.handle.net/1853/45758.

MLA Handbook (7th Edition):

Kim, Minjang. “Dynamic program analysis algorithms to assist parallelization.” 2012. Web. 13 Apr 2021.

Vancouver:

Kim M. Dynamic program analysis algorithms to assist parallelization. [Internet] [Doctoral dissertation]. Georgia Tech; 2012. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/1853/45758.

Council of Science Editors:

Kim M. Dynamic program analysis algorithms to assist parallelization. [Doctoral Dissertation]. Georgia Tech; 2012. Available from: http://hdl.handle.net/1853/45758
