You searched for +publisher:"Georgia Tech" +contributor:("Kim, Taesoo")
Showing records 1 – 22 of 22 total matches.
No search limiters apply to these results.

Georgia Tech
1.
Alzahrani, Ibtehaj M.
Identifying and clustering attack-driven crash reports using machine learning.
Degree: MS, Computer Science, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/62701
We propose a tool that distinguishes crashes caused by failed exploits from benign crashes and clusters them by the exploited vulnerability, so that crashes can be prioritized from a security point of view. The tool extracts features from crash reports and decides whether a crash was caused by malicious behavior. If so, it identifies the attack type that generated the crash report; we focus on four attack types: heap exploitation, shellcode injection, format string attacks, and return-oriented programming. Finally, it clusters the crash reports based on the exploited vulnerabilities.
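As a hedged illustration of the pipeline this abstract describes, the sketch below extracts a few coarse features from a crash report and maps them to one of the four attack types. The feature names and the rule-based decision are illustrative assumptions; the thesis itself applies machine learning to features extracted from real crash reports.

```python
# Illustrative sketch only: hand-written rules standing in for the
# thesis's ML classifier over crash-report features.

def extract_features(report: str) -> dict:
    """Derive boolean features from a raw crash-report string."""
    return {
        "pc_in_heap": "heap" in report,         # heap-exploitation hint
        "nop_sled": "\x90" * 8 in report,       # shellcode-injection hint
        "fmt_specifier": "%n" in report,        # format-string hint
        "ret_to_gadget": "gadget" in report,    # ROP hint
    }

def classify(report: str) -> str:
    """Map a crash report to an attack type, or 'benign'."""
    f = extract_features(report)
    if f["fmt_specifier"]:
        return "format-string"
    if f["nop_sled"]:
        return "shellcode-injection"
    if f["ret_to_gadget"]:
        return "rop"
    if f["pc_in_heap"]:
        return "heap-exploitation"
    return "benign"
```

In practice a trained classifier, plus a clustering step over the exploited vulnerabilities, would replace these hand-written rules.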
Advisors/Committee Members: Lee, Wenke (advisor), Ahamad, Mustaque (committee member), Kim, Taesoo (committee member).
Subjects/Keywords: Attack-driven crash reports

2.
Hesse, Michael Winfried.
Extending the lifecycle of IoT devices using selective deactivation.
Degree: MS, Computer Science, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/63605
IoT devices are known for long-lived hardware and short-lived software support by the vendor, which sets the wrong security incentives for users of expensive IoT systems. To mitigate as many known vulnerabilities as possible after the vendor has stopped providing security patches for an IoT device, we present a framework that allows the user to selectively disable individual hardware components that provide non-essential features associated with those vulnerabilities. In the same way, the framework can also be used proactively to reduce the attack surface of an IoT device by disabling unused features. The user's selection is enforced by a trusted computing base using different hardware security mechanisms on the ARM platform. To this end, we analyze the common hardware architecture of embedded ARM systems using the example of the Raspberry Pi 4. We conclude that only virtualization provides fine-grained enough partitioning capabilities for dividing the hardware into used and unused components. However, we also show how other security mechanisms, including IOMMUs and ARM TrustZone, could be used as an optimization in some cases. Finally, we give a proof-of-concept implementation using the Raspberry Pi 4 and the Sense HAT as a simulation of a complex IoT device and show how 6 of its hardware components can be selectively enabled and disabled.
Advisors/Committee Members: Kim, Taesoo (advisor), Saltaformaggio, Brendan (committee member), Ahamad, Mustaque (committee member).
Subjects/Keywords: IoT; Security; HAL; TCB

3.
Bobek, Jan.
Hidden fallacies in formally verified systems.
Degree: MS, Computer Science, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/62855
Formal verification, or formal methods, represents a rising trend in approaches to correct software construction; that is, it helps us answer the question of how to build software that contains no errors, colloquially known as "bugs." Formal methods achieve this goal by providing means for stating theorems about the program under test, and for proving such theorems by methods well known in mathematics, specifically in mathematical logic. Of course, formal methods are no silver bullet and come with their own set of limitations, the most significant of which is that they scale poorly with software size. In spite of these limitations, there have been important breakthroughs in their application over the last 10–15 years, e.g. Leroy's CompCert (a verified C compiler) and Klein's seL4 (a verified implementation of the L4 microkernel). However, how bug-free is verified software in reality? Formal methods make a bold claim that there are indeed no bugs in verified software, or more formally, that the software precisely implements its specification. Unfortunately, as a 2017 empirical study by Fonseca et al. shows, it may be all too easy to introduce errors into the specification itself, either in the form of mistakes ("specification bugs") or in the form of unanticipated assumptions. This thesis aims to take a broader look at formally verified software and formal verification systems, and to identify the most common problems in applying formal methods that lead to bugs still being present in verified software. In particular, the main contribution of this thesis is an overview of several software projects employing formal methods at their core; an empirical study of the "real-world" guarantees that formal verification systems afford them; and, consequently, showcases of different approaches to verified software, along with their strengths and weaknesses.
We believe that understanding how formal methods succeed and fail (or rather, how they can be misused) will be helpful in determining when they become an attractive and worthwhile choice for more ordinary (as opposed to mission-critical) software. Indeed, we hope that this thesis may serve as an introductory guide for new projects to the guarantees provided by formal verification; correctness guarantees much stronger than those given by software testing. We hope that, in the long term, the entry barrier to formal verification will become low enough for formal methods to enter mainstream software development, making developers more confident about their programs and hopefully ridding our world of "buggy" software once and for all.
Advisors/Committee Members: Kim, Taesoo (advisor), Saltaformaggio, Brendan (committee member), Pu, Calton (committee member), Gavrilovska, Ada (committee member).
Subjects/Keywords: Formal verification; Fuzzing; File systems

4.
Flansburg, Kevin.
A framework for automated management of exploit testing environments.
Degree: MS, Computational Science and Engineering, 2015, Georgia Tech
URL: http://hdl.handle.net/1853/54912
To demonstrate working exploits or vulnerabilities, people often share their findings in the form of a proof-of-concept (PoC) prototype. Such practices are particularly useful for learning about real vulnerabilities and state-of-the-art exploitation techniques. Unfortunately, shared PoC exploits are seldom reproducible: in part because they are often not thoroughly tested, but largely because authors lack a formal way to specify the tested environment or its dependencies. Although exploit writers attempt to overcome such problems by describing their dependencies or testing environments in comments, this informal way of sharing PoC exploits makes it hard for exploit authors to achieve the original goal of demonstration. More seriously, these non- or hard-to-reproduce PoC exploits have limited potential to be utilized for other useful research purposes, such as penetration testing or benchmark suites for evaluating defense mechanisms. In this paper, we present XShop, a framework and infrastructure for describing the environments and dependencies of exploits in a formal way, and for automatically resolving these constraints to construct an isolated environment for development, testing, and sharing with the community. We show how XShop's flexible design enables new possibilities for utilizing these reproducible exploits in five practical use cases: as a security benchmark suite, in pen-testing, for large-scale vulnerability analysis, as a shared development environment, and for regression testing. We design and implement such applications by extending the XShop framework and demonstrate its effectiveness with twelve real exploits against well-known bugs, including GHOST, Shellshock, and Heartbleed. We believe that the proposed practice not only brings immediate incentives to exploit authors but also has the potential to grow into a community-wide knowledge base.
Advisors/Committee Members: Kim, Taesoo (advisor), Antonakakis, Manos (committee member), Chau, Duen Horng (Polo) (committee member).
Subjects/Keywords: Software testing; Computer security

5.
Lee, Byoungyoung.
Protecting computer systems through eliminating or analyzing vulnerabilities.
Degree: PhD, Computer Science, 2016, Georgia Tech
URL: http://hdl.handle.net/1853/58603
There have been tremendous efforts to build fully secure computer systems, but it is not an easy goal. A simple mistake can introduce a vulnerability that critically endangers a whole system's security. This thesis aims at protecting computer systems from vulnerabilities. We take two complementary approaches toward this goal: eliminating vulnerabilities and analyzing them. In the vulnerability elimination approach, we eliminate certain classes of memory corruption vulnerabilities to completely close the attack vectors they open. In particular, we develop the tools DangNull and CaVer, which eliminate two popular and emerging vulnerability classes, use-after-free and bad-casting, respectively. DangNull relies on the key observation that the root cause of use-after-free is that pointers are not nullified after the target object is freed. Thus, DangNull instruments a program to trace objects' relationships via pointers and automatically nullifies all pointers when the target object is freed. Similarly, CaVer relies on the key observation that the root cause of bad-casting is that casting operations are not properly verified. Thus, CaVer uses a new runtime type tracing mechanism to overcome the limitations of existing approaches, and efficiently verifies all type casting operations dynamically. We have implemented these protection solutions and successfully applied them to the Chrome and Firefox browsers. Our evaluation showed that DangNull and CaVer impose 29% and 7.6% benchmark overhead in Chrome, respectively. We have also tested seven use-after-free and five bad-casting exploits in Chrome, and DangNull and CaVer safely prevented them all. In the vulnerability analysis approach, we focus on timing-channel vulnerabilities, which allow an attacker to learn information about a program's sensitive data without causing the program to perform unsafe operations. It is challenging to test and confirm a timing-channel vulnerability, as it typically involves complex algorithmic operations. We implemented SideFinder, an assistant tool that identifies timing-channel vulnerabilities in a hash table. Empowered with symbolic execution techniques, SideFinder semi-automatically synthesizes inputs attacking timing channels, and thus confirms the vulnerability. Using SideFinder, we analyzed and synthesized two real-world attacks in the Linux kernel and showed it can break one important security mechanism, Address Space Layout Randomization (ASLR).
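The DangNull idea described above can be sketched in miniature: track which pointers reference each object and nullify all of them when the object is freed, so a later dereference fails safely instead of touching recycled memory. This Python model of an instrumented allocator is only an illustration of the concept; the thesis implementation instruments C/C++ programs.

```python
# Toy model of DangNull-style pointer nullification (not the thesis code).

class Allocator:
    def __init__(self):
        self.live = set()   # ids of allocated objects
        self.ptrs = {}      # object id -> names of pointers referencing it
        self.env = {}       # pointer name -> object id, or None

    def alloc(self, obj_id):
        self.live.add(obj_id)
        self.ptrs[obj_id] = set()

    def assign(self, name, obj_id):
        """Record 'name = obj_id', tracking the alias relationship."""
        self.env[name] = obj_id
        self.ptrs[obj_id].add(name)

    def free(self, obj_id):
        """Free the object and nullify every pointer that referenced it."""
        self.live.discard(obj_id)
        for name in self.ptrs.pop(obj_id, ()):
            self.env[name] = None

    def deref(self, name):
        target = self.env[name]
        if target is None:
            raise RuntimeError("null dereference (use-after-free prevented)")
        return target
```

A dangling alias thus turns a silent use-after-free into a safe, detectable null dereference.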
Advisors/Committee Members: Lee, Wenke (advisor), Kim, Taesoo (advisor), Harris, William R. (committee member), Orso, Alessandro (committee member), Cui, Weidong (committee member).
Subjects/Keywords: Security; Vulnerability; Use after free; Bad casting; Timing channel

6.
Maass, Steffen Alexander.
Systems abstractions for big data processing on a single machine.
Degree: PhD, Computer Science, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/61679
Large-scale internet services, such as Facebook or Google, use clusters of many servers for problems such as search, machine learning, and social networks. While it may be possible to apply the tools used at this scale to smaller, more common problems as well, this dissertation presents approaches to large-scale data processing on only a single machine. This approach has obvious cost benefits and lowers the barrier of entry to large-scale data processing. The dissertation approaches this problem by redesigning applications to enable trillion-scale graph processing on a single machine, while also enabling the processing of evolving, billion-scale graphs. First, this dissertation presents a new out-of-core graph processing engine, called Mosaic, for executing graph algorithms on trillion-scale datasets on a single machine. Mosaic makes use of many-core processors and fast I/O devices coupled with a novel graph encoding scheme to allow processing of graphs of up to one trillion edges on a single machine. Mosaic also employs a locality-preserving, space-filling curve to achieve high compression and high locality when storing graphs and executing algorithms. Our evaluation shows that for smaller graphs, Mosaic consistently outperforms other state-of-the-art out-of-core engines by 3.2x–58.6x and shows comparable performance to distributed graph engines. Furthermore, Mosaic can complete one iteration of the Pagerank algorithm on a trillion-edge graph in 21 minutes, outperforming a distributed disk-based engine by 9.2x. Second, while Mosaic addresses the processing of static graphs, this dissertation also presents Cytom, a new engine for processing billion-scale evolving graphs, based on insights about achieving high compression and locality while improving load balancing when processing a graph that changes rapidly. Cytom introduces a novel programming model that takes advantage of its subgraph-centric approach coupled with the setting of evolving graphs. This is an important enabling step for emerging workloads that process graphs changing over time. Cytom's programming model allows algorithm developers to quickly react to graph updates, discarding uninteresting ones while focusing on updates that, in fact, change the algorithmic result. We show that Cytom is effective in scaling to billion-edge graphs, as well as providing higher throughput when updating the graph structure (2.0x–192x) and when additionally processing an algorithm (1.5x–200x).
Advisors/Committee Members: Kim, Taesoo (advisor), Gavrilovska, Ada (committee member), Ramachandran, Umakishore (committee member), Krishna, Tushar (committee member), Zwaenepoel, Willy (committee member).
Subjects/Keywords: Runtime system; Big data; Graph analytics; Performance optimization; Incremental processing; Heterogeneous computing

7.
Shih, Mingwei.
Securing Intel SGX against side-channel attacks via load-time synthesis.
Degree: PhD, Computer Science, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/62337
In response to the growing need for securing user data in the cloud, recent Intel processors have supported a new feature, Intel Software Guard Extensions (SGX). SGX allows a program to execute in isolation from the rest of the underlying system. Thus, even after compromising the system, neither cloud providers nor attackers can gain access to the data that the program processes. Unfortunately, recent studies have shown that such isolation is bypassable via side-channel attacks (SCAs). In particular, SCAs against SGX are more critical under its extreme threat model (i.e., attackers have compromised the system), allowing attackers to infer fine-grained information from an SGX-protected program. Toward practical defenses against SCAs on SGX, the first part of the thesis presents two mitigation techniques, SGX-Armor and T-SGX, both of which require neither hardware- nor source-code-level modifications and incur only moderate runtime overhead. SGX-Armor is a general-purpose defense based on Address Space Layout Randomization (ASLR) that obfuscates the memory layout of the program, preventing attackers from interpreting side-channel information. Unlike traditional ASLR implementations, SGX-Armor incorporates a secure algorithm that shuffles the memory layout without revealing layout information through any of the known side channels. T-SGX is a novel defense against controlled-channel attacks, which exploit page faults as a side channel. By using Intel Transactional Synchronization Extensions (TSX) as a primitive that suppresses page faults, T-SGX automatically transforms a program into a protected one at compile time. The second part of the thesis presents Pridwen, a framework that addresses the challenges of combining multiple mitigation techniques such as SGX-Armor and T-SGX, thereby providing a broader scope of protection against SCAs on SGX. Using load-time synthesis, Pridwen adaptively enforces mitigation schemes on a program in distinct cloud environments. The Pridwen prototype supports four mitigation schemes that secure SGX programs against various SCAs while minimizing the incurred runtime overhead according to the configuration of the environment.
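The ASLR idea behind a defense like SGX-Armor can be sketched as follows: give each section a fresh random, page-aligned base at load time so that leaked side-channel offsets are hard to interpret. This is a toy layout model under assumed names; a real implementation must also avoid overlapping sections and, as the abstract notes, must not leak the chosen layout through side channels.

```python
# Toy load-time layout randomization (illustrative, not SGX-Armor itself).
import secrets

PAGE = 0x1000  # 4 KiB pages

def randomize_layout(sections, span=2**32):
    """Assign each section a random page-aligned base within `span`."""
    layout = {}
    for name in sections:
        layout[name] = secrets.randbelow(span // PAGE) * PAGE
    return layout

layout = randomize_layout(["text", "heap", "stack"])
```

Because the bases change on every load, an attacker who observes page-granular access patterns cannot directly map them back to program locations.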
Advisors/Committee Members: Kim, Taesoo (advisor), Lee, Wenke (committee member), Peinado, Marcus (committee member), Steiner, Michael (committee member), Saltaformaggio, Brendan (committee member).
Subjects/Keywords: Intel SGX; Side-channel attacks

8.
Song, Chengyu.
Preventing exploits against memory corruption vulnerabilities.
Degree: PhD, Computer Science, 2016, Georgia Tech
URL: http://hdl.handle.net/1853/55651
The most common cyber-attack vector is the exploitation of software vulnerabilities. Despite much effort toward building secure software, software systems of even modest complexity still routinely have serious vulnerabilities. More alarmingly, even the trusted computing base (e.g., the OS kernel) still contains vulnerabilities that allow attackers to subvert security mechanisms such as the application sandbox on smartphones. Among all vulnerabilities, memory corruption is one of the most ancient, prevalent, and devastating. This thesis proposes three projects for mitigating this threat. There are three popular ways to exploit a memory corruption vulnerability: attacking the code (a.k.a. code injection), the control data (a.k.a. control-flow hijacking), and the non-control data (a.k.a. data-oriented attacks). Theoretically, code injection attacks can be prevented with the executable-XOR-writable policy; in practice, however, this policy is undermined by another important technique, dynamic code generation (e.g., JIT engines). In the first project, we first show that this conflict is non-trivial to resolve, then introduce a new design paradigm that fundamentally solves the problem by relocating the dynamic code generator to a separate process. In the second project, we focus on preventing data-oriented attacks against the operating system kernel. Using privilege escalation attacks as an example, we (1) demonstrate that data-oriented attacks are realistic threats and hard to prevent; (2) discuss two important challenges in preventing such attacks (completeness and performance); and (3) present a system that combines program analysis techniques and system designs to solve these challenges. During these two projects, we found that the lack of sufficient hardware support imposes many unnecessary difficulties in building robust and efficient defense mechanisms.
In the third project, we propose HDFI (hardware-assisted data-flow isolation) to overcome this limitation. HDFI is a new fine-grained isolation mechanism that enforces isolation at machine-word granularity by virtually extending each memory unit with an additional tag defined by data flow. This capability allows HDFI to enforce a variety of security models, such as the Biba Integrity Model and the Bell–LaPadula Model. For demonstration, we developed and ported several security mechanisms to leverage HDFI, including stack protection, standard library enhancement, virtual function table protection, code pointer protection, kernel data protection, and information leak prevention. The evaluation results show that HDFI is easy to use, imposes low performance overhead, and allows us to create simpler and more secure solutions.
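HDFI's word-granularity tagging can be modeled in a few lines: every memory word carries a one-bit tag set by the store that wrote it, and a checked load demands a specific tag, so attacker-controlled writes (tag 0) cannot be consumed where trusted data (tag 1) is expected. This is a software simulation of the concept; HDFI itself extends the ISA and memory system.

```python
# Software model of word-granularity data-flow tagging (HDFI-style idea).

class TaggedMemory:
    def __init__(self, size: int):
        self.data = [0] * size  # memory words
        self.tag = [0] * size   # one tag bit per word

    def store(self, addr: int, value: int, tag: int = 0):
        """Write a word; the tag records which data flow produced it."""
        self.data[addr] = value
        self.tag[addr] = tag

    def load_checked(self, addr: int, expect_tag: int) -> int:
        """Read a word, enforcing that it came from the expected data flow."""
        if self.tag[addr] != expect_tag:
            raise RuntimeError("data-flow isolation violation")
        return self.data[addr]
```

For example, a return address stored with tag 1 survives a checked load, while an overflow that overwrites it with tag 0 is caught at the load.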
Advisors/Committee Members: Lee, Wenke (advisor), Kim, Taesoo (advisor), Harris, William R. (committee member), Ahamad, Mustaque (committee member), Cui, Weidong (committee member).
Subjects/Keywords: Memory corruption; Exploit prevention; Code injection; Privilege escalation; Data-flow integrity

9.
Xu, Meng.
Finding race conditions in kernels: The symbolic way and the fuzzy way.
Degree: PhD, Computer Science, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/63668
The scale and pervasiveness of concurrent software pose challenges for security researchers: race conditions are more prevalent than ever, and growing software complexity keeps exacerbating the situation, expanding the arms race between security practitioners and attackers beyond memory errors. As a consequence, we need a new generation of bug-hunting tools that not only scale well with increasingly large codebases but also catch up with the growing importance of race conditions. In this thesis, two complementary race detection frameworks for OS kernels are presented: multi-dimensional fuzz testing and symbolic checking. Fuzz testing turns bug finding into a probabilistic search, but current practice restricts itself to one dimension only (sequential executions). This thesis illustrates how to explore the concurrency dimension and extend the bug scope beyond memory errors to the broad spectrum of concurrency bugs. Conventional symbolic executors, on the other hand, face challenges when applied to OS kernels, such as path explosion due to branching and loops. They also lack a systematic way of modeling and tracking constraints in the concurrency dimension (e.g., to enforce a particular schedule of thread interleavings). This thesis partially fills that gap with novel techniques for symbolic execution.
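The concurrency-dimension search space the abstract refers to can be made concrete with a classic lost-update race: two unsynchronized increments, each split into a load and a store. Enumerating every schedule that respects each thread's program order (a tiny stand-in for what a concurrency fuzzer or symbolic checker explores) shows that some interleavings lose an update.

```python
# Exhaustively explore interleavings of two unsynchronized increments.
from itertools import permutations

def run(schedule):
    """Execute a schedule of (thread, op) steps; return the final counter."""
    shared = 0
    regs = {0: 0, 1: 0}          # each thread's private register
    for tid, op in schedule:
        if op == "load":
            regs[tid] = shared   # read shared counter
        else:                    # "store": write back register + 1
            shared = regs[tid] + 1
    return shared

def interleavings():
    """Yield every ordering of T0:[load,store] and T1:[load,store]
    that preserves each thread's own program order."""
    steps = [(0, "load"), (0, "store"), (1, "load"), (1, "store")]
    for perm in set(permutations(steps)):
        if perm.index((0, "load")) < perm.index((0, "store")) and \
           perm.index((1, "load")) < perm.index((1, "store")):
            yield perm

outcomes = {run(s) for s in interleavings()}
```

Here `outcomes` is {1, 2}: schedules where both loads happen before either store lose one increment, which is exactly the kind of schedule-dependent bug a one-dimensional (sequential) fuzzer never sees.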
Advisors/Committee Members: Kim, Taesoo (advisor), Lee, Wenke (committee member), Orso, Alessandro (committee member), Saltaformaggio, Brendan D. (committee member), Peinado, Marcus (committee member).
Subjects/Keywords: Race condition; Fuzz testing; Symbolic execution; Bug finding; OS kernel
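The abstract's point about the concurrency dimension can be illustrated with a toy lost-update race (a sketch of ours, not code from the thesis): a fuzzer that only explores sequential executions never observes the failure, because the bug manifests only under particular thread interleavings.

```python
import threading

def lost_update_counter(n_threads=2, iters=50_000, interleave=True):
    """Shared counter with an unsynchronized read-modify-write.

    With interleave=False the increments run back to back (the
    sequential dimension a conventional fuzzer explores) and the
    result is always exact; with interleave=True, thread switches
    between the read and the write can silently drop updates.
    """
    state = {"counter": 0}

    def worker():
        for _ in range(iters):
            tmp = state["counter"]      # read
            state["counter"] = tmp + 1  # write; racy without a lock

    if interleave:
        threads = [threading.Thread(target=worker) for _ in range(n_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    else:
        for _ in range(n_threads):
            worker()
    return state["counter"]

# The sequential dimension always passes; only interleaved runs can
# lose updates, so finding the bug requires exploring schedules.
assert lost_update_counter(interleave=False) == 100_000
assert lost_update_counter(interleave=True) <= 100_000
```

Whether the concurrent run actually loses an update is nondeterministic, which is exactly why race detectors must search over schedules rather than inputs alone.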

Georgia Tech
10.
Kashyap, Sanidhya.
Scaling synchronization primitives.
Degree: PhD, Computer Science, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/63677
Over the past decade, multicore machines have become the norm: a single machine may have thousands of hardware threads or cores, and even cloud providers offer such large multicore machines for data-processing engines and databases. Thus, a fundamental question arises: how efficient are the existing synchronization primitives, timestamping and locking, that developers use to design concurrent, scalable, and performant applications? This dissertation focuses on the scalability of these primitives and presents new algorithms and approaches that leverage either the hardware or application domain knowledge to scale up to hundreds of cores. First, the thesis presents Ordo, a scalable ordering or timestamping primitive that forms the basis for designing scalable timestamp-based concurrency control mechanisms. Ordo relies on invariant hardware clocks and provides the notion of a globally synchronized clock within a machine. We use the Ordo primitive to redesign a synchronization mechanism and the concurrency control mechanisms in databases and software transactional memory. Later, the thesis focuses on the scalability of locks in both virtualized and non-virtualized scenarios. In a virtualized environment, we identify that locks suffer from various preemption issues due to a semantic gap between the hypervisor scheduler and a virtual machine scheduler, the double-scheduling problem. We address this problem by bridging the gap: the hypervisor and virtual machines share minimal scheduling information to avoid the preemption problems. Finally, we focus on the design of lock algorithms in general. We find that locks in practice diverge from locks in design: popular spinlocks suffer from excessive cache-line bouncing on multicore (NUMA) systems, while state-of-the-art locks exhibit sub-par single-thread performance. We classify several dominating factors that impact the performance of lock algorithms and then propose a new technique, shuffling, that can dynamically accommodate all these factors without slowing down the critical path of the lock. The key idea of shuffling is to reorder the queue of threads waiting to acquire the lock according to some pre-established policy. Using shuffling, we propose a family of locking algorithms, called SHFLLOCKS, that respect all factors, efficiently utilize waiters, and achieve the best performance.
Advisors/Committee Members: Kim, Taesoo (advisor), Min, Changwoo (advisor), Gavrilovska, Ada (committee member), Calciu, Irina (committee member), Arulraj, Joy (committee member).
Subjects/Keywords: OS; Concurrency; Mutual exclusion; File system; Scalability; Timestamping; Database
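The shuffling idea, reordering the wait queue under a policy without delaying the thread at the head, can be sketched in a few lines (a toy NUMA-grouping policy; function and names are ours, not SHFLLOCK's actual implementation):

```python
def shuffle_waiters(waiters, socket_of):
    """Reorder a lock's wait queue so waiters on the same NUMA socket
    as the next lock holder run back to back, reducing cache-line
    migration between sockets (sketch of the 'shuffling' policy).

    The waiter at the head keeps its position, so the reordering never
    delays the thread about to acquire the lock (critical path intact).
    """
    if not waiters:
        return []
    head, rest = waiters[0], waiters[1:]
    # Stable sort: waiters sharing the head's socket come first, then
    # the rest grouped by socket id, preserving arrival order in groups.
    rest_sorted = sorted(
        rest, key=lambda w: (socket_of[w] != socket_of[head], socket_of[w])
    )
    return [head] + rest_sorted

# Example: threads t0..t5 arrive in FIFO order, spread over two sockets.
sockets = {"t0": 0, "t1": 1, "t2": 0, "t3": 1, "t4": 0, "t5": 1}
order = shuffle_waiters(["t0", "t1", "t2", "t3", "t4", "t5"], sockets)
assert order == ["t0", "t2", "t4", "t1", "t3", "t5"]
```

Under plain FIFO, lock ownership would bounce between sockets on every handover; the grouped order hands the lock over within a socket as long as possible.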

Georgia Tech
11.
Yun, Insu.
Concolic Execution Tailored for Hybrid Fuzzing.
Degree: PhD, Computer Science, 2020, Georgia Tech
URL: http://hdl.handle.net/1853/64153
Recently, hybrid fuzzing, which combines fuzzing and concolic execution, has been highlighted as a way to overcome the limitations of both techniques. Despite its success in contrived programs such as the DARPA Cyber Grand Challenge (CGC), it still falls short in finding bugs in real-world software due to the low performance of existing concolic executors.
To address this issue, this dissertation proposes and demonstrates concolic execution tailored for hybrid fuzzing with two systems: QSYM and Hybridra. First, we present QSYM, a binary-only concolic executor tailored for hybrid fuzzing. It significantly improves the performance of conventional concolic executors by removing redundant symbolic emulation for a binary. Moreover, to efficiently produce test cases for fuzzing, even at the cost of soundness, QSYM introduces two key techniques: optimistic solving and basic block pruning. As a result, QSYM outperforms state-of-the-art fuzzers, and, more importantly, it found 13 new bugs in eight real-world programs, including file, ffmpeg, and OpenJPEG.
Enhancing the key idea of QSYM, we discuss Hybridra, a new concolic executor for file systems. To apply hybrid fuzzing to file systems, which are gigantic and convoluted, Hybridra employs compilation-based concolic execution, leveraging the availability of source code to boost concolic execution. Moreover, Hybridra introduces a new technique called staged reduction, which combines existing heuristics to efficiently generate test cases for file systems. Consequently, Hybridra outperforms a state-of-the-art file system fuzzer, Hydra, by achieving higher code coverage, and it successfully discovered four new bugs in btrfs, which has been heavily tested by other fuzzers.
Advisors/Committee Members: Kim, Taesoo (advisor), Lee, Wenke (committee member), Cui, Weidong (committee member), Naik, Mayur (committee member), Orso, Alessandro (committee member).
Subjects/Keywords: Hybrid fuzzing; Concolic execution; Fuzzing
12.
Forster, Jeffrey Edward.
Using Intel® SGX technologies to secure large scale systems in public cloud environments.
Degree: MS, Computer Science, 2018, Georgia Tech
URL: http://hdl.handle.net/1853/59840
Intel SGX enables securing applications at the hardware level, which makes it a very useful tool for running applications on untrusted hosts. Security based on hardware mechanisms is a rapidly evolving research area, and SGX and similar tools have great potential to enhance security for many sensitive applications. Researchers are currently building tools that aid in porting applications to hardware-based security. My work has demonstrated both benefits and limitations of the current SGX environment and its associated tools. The current technology cannot support large applications such as MySQL server, and more work will be needed to enable deployment of large-scale solutions.
Advisors/Committee Members: Kim, Taesoo (advisor), Lee, Wenke (committee member), Gavrilovska, Ada (committee member).
Subjects/Keywords: Intel; SGX; Enclave; Security; SQL; MySQL

Georgia Tech
13.
Huang, Jian.
Exploiting intrinsic flash properties to enhance modern storage systems.
Degree: PhD, Computer Science, 2017, Georgia Tech
URL: http://hdl.handle.net/1853/60162
The longstanding goals of storage system design have been to provide simple abstractions for applications to efficiently access data while ensuring data durability and security on a hardware device. The traditional storage system, which was designed for slow hard disk drives with little parallelism, does not fit new storage technologies such as faster flash memory with high internal parallelism. The gap between the storage system software and the flash device causes both resource inefficiency and sub-optimal performance. This dissertation rethinks storage system design for flash memory with a holistic approach, from the system level to the device level, and revisits several critical aspects of storage system design, including storage performance, performance isolation, energy efficiency, and data security. The traditional storage system lacks full performance isolation between applications sharing the device because it does not make the software aware of the underlying flash properties and constraints. This dissertation proposes FlashBlox, a storage virtualization system that utilizes flash parallelism to provide hardware isolation between applications by assigning them to dedicated chips. FlashBlox dramatically reduces the tail latency of storage operations compared with existing software-based isolation techniques while achieving uniform lifetime for the flash device. As the underlying flash device latency is reduced significantly compared to the conventional hard disk drive, storage software overhead has become the major bottleneck. This dissertation presents FlashMap, a holistic flash-based storage stack that combines memory, storage, and device-level indirections into a unified layer. By combining these layers, FlashMap reduces critical-path latency for accessing data in the flash device and significantly improves DRAM caching efficiency for flash management.
Traditional storage software incurs energy-intensive storage operations due to the need to maintain data durability and security for personal data, which has become a significant challenge for resource-constrained devices such as mobiles and wearables. This dissertation proposes WearDrive, a fast and energy-efficient storage system for wearables. WearDrive treats battery-backed DRAM as non-volatile memory to store personal data and trades the connected phone's battery for the wearable's by performing large, energy-intensive tasks on the phone while performing small, energy-efficient tasks locally using battery-backed DRAM. WearDrive improves the wearable's battery life significantly with negligible impact on the phone's battery life. Storage software, which has been developed over decades, is still vulnerable to malware attacks; one example is encryption ransomware, malicious software that stealthily encrypts user files and demands a ransom to provide access to these files. Prior solutions such as ransomware detection and data backups have been proposed to defend against encryption…
Advisors/Committee Members: Qureshi, Moinuddin K. (advisor), Ramachandran, Umakishore (committee member), Kim, Taesoo (committee member), Swanson, Steven (committee member), Mickens, James (committee member), Badam, Anirudh (committee member).
Subjects/Keywords: Flash memory; Storage systems; Cloud storage; Wearable storage; Performance isolation; System security

Georgia Tech
14.
Bhardwaj, Ketan.
Frame, rods and beads of the edge computing abacus.
Degree: PhD, Computer Science, 2016, Georgia Tech
URL: http://hdl.handle.net/1853/56332
Emerging applications, enabled by powerful end-user devices and 5G technologies, pose demands for reduced access latencies to web services and a dramatic increase in back-haul network capacity. In response, edge computing, the use of computational resources closer to end devices at the edge of the network, is becoming an attractive approach to addressing these demands. Going beyond point solutions, the vision of edge computing is to enable web services to deploy their edge functions (EFs) in a multi-tenant infrastructure present at the edge of mobile networks. However, critical technical challenges need to be addressed to make that vision possible. This dissertation addresses three such challenges:
1. Demonstrating the benefits of edge functions for the real-world, highly dynamic, and large-scale Android app ecosystem: (i) AppFlux and AppSachets relieve the bandwidth pressure of existing app-delivery mechanisms, and (ii) ephemeral apps and app slices rethink app delivery for emerging usage models, highlighting that edge computing can enable transformational changes in the computing landscape beyond latency and bandwidth optimizations.
2. The design and implementation of AirBox, a secure, lightweight, and flexible edge function platform that lets web services deploy and manage their EFs on edge computing nodes on demand. AirBox is based on a detailed experimental design-space exploration of system-level mechanisms suitable for an edge function platform, addressing the technical challenges of provisioning, management, and EF security. AirBox leverages state-of-the-art hardware-assisted, OS-agnostic security features, such as Intel SGX, to prescribe a reference design for a secure EF.
3. Finally, a solution to the most critical issue: enabling edge functions while preserving end-to-end security guarantees. Today, when most web services are delivered over encrypted traffic, edge functions cannot provide meaningful functionality without compromising security or forfeiting their performance benefits. Secure protocol extensions (SPX) can efficiently maintain the proposed End-to-Edge-to-End (E3) security semantics. Using SPX, we accomplish the seemingly impossible task of allowing edge functions to operate on encrypted traffic transmitted over secure protocols with modest overheads, while ensuring their security semantics and continuing to provide the benefits of edge computing.
Advisors/Committee Members: Gavrilovska, Ada (advisor), Schwan, Karsten (advisor), Liu, Ling (committee member), Silva, Dilma Da (committee member), Ammar, Mostafa (committee member), Zegura, Ellen (committee member), Kim, Taesoo (committee member).
Subjects/Keywords: Edge computing; Edge functions; Secure edge functions; Secure protocol extensions; Android app streaming; Ephemeral apps; App slices
15.
Jin, Wei.
Automated support for reproducing and debugging field failures.
Degree: PhD, Computer Science, 2015, Georgia Tech
URL: http://hdl.handle.net/1853/53894
As confirmed by a recent survey conducted among developers of the Apache, Eclipse, and Mozilla projects, two extremely challenging tasks during maintenance are reproducing and debugging field failures, that is, failures that occur on user machines after release. In my PhD study, I have developed several techniques to address and mitigate the problems of reproducing and debugging field failures. In this defense, I present an overview of my work and describe in detail four different techniques: BugRedux, F3, Clause Weighting (CW), and On-demand Formula Computation (OFC). BugRedux is a general technique for reproducing field failures that collects dynamic data about failing executions in the field and uses this data to synthesize executions that mimic the observed field failures. F3 leverages the executions generated by BugRedux to perform automated debugging using a set of suitably optimized fault-localization techniques. CW and OFC improve the overall effectiveness and efficiency of state-of-the-art formula-based debugging. In addition to presenting these techniques, I also present an empirical evaluation on a set of real-world programs and field failures. The results of the evaluation are promising in that, for all the failures considered, my approach was able to (1) synthesize failing executions that mimicked the observed field failures, (2) synthesize passing executions similar to the failing ones, and (3) use the synthesized executions successfully to perform fault localization with accurate results.
Advisors/Committee Members: Orso, Alessandro (advisor), Prvulovic, Milos (committee member), Naik, Mayur (committee member), Kim, Taesoo (committee member), Chandra, Satish (committee member).
Subjects/Keywords: Debugging; Fault localization; Field failures
16.
Narain, Abhinav.
Near field deniable communication.
Degree: PhD, Computer Science, 2017, Georgia Tech
URL: http://hdl.handle.net/1853/58703
There is increasing interest among companies and government agencies in snooping on people's daily lives, and increasing difficulty for people in coping with such scrutiny. The need for private communication is perhaps greater than ever before. Officials at the NSA have stated that "if you have enough meta-data you don't really need content" and that "we kill people based on meta-data." People have long needed to keep the communications among themselves private, but, increasingly, they may want to conceal not only the messages that they exchange, but also with whom they are communicating, or even the fact that they are communicating at all. This latter type of communication is said to be not only confidential and anonymous but also deniable, in the sense that despite exchanging messages, participants can plausibly deny that any such exchanges ever took place.
This dissertation develops techniques and systems that give users in physical proximity mechanisms for deniable communication. Our work builds on the observation of noise in surrounding technologies such as wireless networks and powerline networks. The thesis uses noise, rather than protocol obfuscation, to create deniable channels between individuals who do not want any third party to recognize that communication may be in progress. Working with collaborators at Georgia Tech, I built two systems. The first, Denali, explores two approaches at the link layer of the 802.11 wireless channel. Looking for alternative technologies, I turned to innocuous-looking powerline networks, which led to Powerline Whisperer, a system that uses the physical layer of powerline cables for deniable communication. Due to the lack of available cover, the system does not presume any established communication channel for exchanging a message, but instead depends on thermal noise and the electromagnetic interference from devices present on the medium.
Advisors/Committee Members: Feamster, Nick (advisor), Ammar, Mostafa (committee member), Kim, Taesoo (committee member), Snoeren, Alex (committee member), Venkateswaran, Hariharan (committee member).
Subjects/Keywords: Deniable; Communication
17.
Jang, Yeong Jin.
Building trust in the user I/O in computer systems.
Degree: PhD, Computer Science, 2017, Georgia Tech
URL: http://hdl.handle.net/1853/58732
User input plays an essential role in computer security because it can control system behavior and make security decisions in the system. System output to users, or user output, is also important because it often contains security-critical information whose integrity and confidentiality must be protected, such as passwords and the user's private data. Despite the importance of user input and output (I/O), modern computer systems often fail to provide the necessary security guarantees for them, which can result in serious security breaches. This dissertation aims to build trust in the user I/O in computer systems to keep systems secure from attacks on the user I/O. To this end, we analyze the user I/O paths on popular platforms, including desktop operating systems, mobile operating systems, and trusted execution environments such as Intel SGX, and identify that threats and attacks on the user I/O can be blocked by guaranteeing three key security properties of user I/O: integrity, confidentiality, and authenticity. First, GYRUS addresses the integrity of user input by matching the user's original input with the content of outgoing network traffic to authorize user-intended network transactions. Second, M-AEGIS addresses the confidentiality of user I/O by implementing an encryption layer on top of the user interface layer that provides user-to-user encryption. Third, the A11Y ATTACK demonstrates the importance of verifying user I/O authenticity with twelve new attacks. Finally, to establish trust in the user I/O in a commodity computer system, I built SGX-USB, which combines all three security properties to ensure the assurance of user I/O. The implemented system supports common user input devices such as a keyboard and a mouse over the trusted channel. Assurance in user I/O allows the computer system to securely handle commands and data from the user by eliminating attack pathways to the system's I/O paths.
Advisors/Committee Members: Lee, Wenke (advisor), Kim, Taesoo (advisor), Ahamad, Mustaque (committee member), Li, Kang (committee member), Kim, Yongdae (committee member).
Subjects/Keywords: Security; I/O
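The GYRUS integrity check, authorizing outgoing traffic only when it matches what the user actually entered, can be caricatured in a few lines (a deliberately simplified sketch of ours; GYRUS's real matching operates on captured UI state, not a substring test):

```python
def authorize_outgoing(user_typed, outgoing_payload):
    """Approve an outgoing transaction only if its user-visible
    content matches input captured on the trusted UI path.
    Simplified to a substring check for illustration."""
    return user_typed in outgoing_payload

# Traffic carrying what the user actually composed is authorized...
assert authorize_outgoing("lunch at noon?", "POST /send msg=lunch at noon?")
# ...while injected traffic the user never typed is blocked.
assert not authorize_outgoing("lunch at noon?", "POST /send msg=wire money")
```

The key property is that the decision is anchored in input observed on a trusted path, so malware that forges network transactions without corresponding user input has nothing to match against.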
18.
Kumar, Mohan Kumar.
Taming latency in data center applications.
Degree: PhD, Computer Science, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/61693
► A new breed of low-latency I/O devices, such as the emerging remote memory access and the high-speed Ethernet NICs, are becoming ubiquitous in current data…
(more)
▼ A new breed of low-latency I/O devices, such as the emerging remote memory access and the high-speed Ethernet NICs, are becoming ubiquitous in current data centers. For example, big data center operators such as Amazon, Facebook, Google, and Microsoft are already migrating their networks to 100G. However, the overhead incurred by the system software, such as protocol stack and synchronous operations, is dominant with these faster I/O devices. This dissertation approaches the above problem by redesigning a protocol stack to provide an interface for the latency-sensitive operation, and redesigning synchronous operation such as TLB shootdown and consensus in the operating systems and distributed systems respectively.
First, the dissertation presents an extensible protocol stack, XPS, to address the software overhead incurred in protocol stacks such as TCP and UDP. XPS provides abstractions that allow an application-defined, latency-sensitive operation to run immediately after protocol processing (called the fast path) in various protocol stacks: a commodity OS protocol stack (e.g., Linux), a user-space protocol stack (e.g., mTCP), and recent SmartNICs. For all other operations, XPS retains the popular, well-understood socket interface. XPS's approach is practical: rather than proposing a new OS or removing the socket interface entirely, our goal is to provide stack extensions for latency-sensitive operations and use the existing socket layer for everything else. Second, the dissertation provides a lazy, asynchronous mechanism to address the system-software overhead incurred by a synchronous operation, TLB shootdown. The key idea of the lazy shootdown mechanism, called LATR, is to use lazy memory reclamation and lazy page-table unmapping to perform an asynchronous TLB shootdown. By handling TLB shootdowns in a lazy fashion, LATR eliminates the performance overhead of IPI mechanisms as well as the waiting time for acknowledgments from remote cores; as an asynchronous mechanism, LATR provides an eventually consistent solution. Finally, the dissertation untangles the logically coupled consensus mechanism from the application, which alleviates the overhead incurred by consensus algorithms such as Multi-Paxos/Viewstamped Replication (VR). Through physical isolation, DYAD keeps the consensus component from competing with the application for system resources, which improves application performance. To provide physical isolation, DYAD defines the abstraction needed from the SmartNIC and the operations performed by the application running on the host processor. With the resulting consensus mechanism, the host processor handles only client requests in the normal case, while the remaining messages needed for consensus are handled on the SmartNIC.
Advisors/Committee Members: Kim, Taesoo (advisor), Gavrilovska, Ada (committee member), Ramachandran, Umakishore (committee member), Krishna, Tushar (committee member), Jang, Keon (committee member).
Subjects/Keywords: Latency; Data center; Smart NICs; Protocol stack; Consensus; TLB shootdown
APA (6th Edition):
Kumar, M. K. (2019). Taming latency in data center applications. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61693
Chicago Manual of Style (16th Edition):
Kumar, Mohan Kumar. “Taming latency in data center applications.” 2019. Doctoral Dissertation, Georgia Tech. Accessed March 04, 2021.
http://hdl.handle.net/1853/61693.
MLA Handbook (7th Edition):
Kumar, Mohan Kumar. “Taming latency in data center applications.” 2019. Web. 04 Mar 2021.
Vancouver:
Kumar MK. Taming latency in data center applications. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/1853/61693.
Council of Science Editors:
Kumar MK. Taming latency in data center applications. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61693
19.
Meng, Wei.
Identifying and mitigating threats from embedding third-party content.
Degree: PhD, Computer Science, 2017, Georgia Tech
URL: http://hdl.handle.net/1853/58766
▼ Embedding content from third parties to enrich features is a common practice in the development of modern web and mobile applications. Such practices can pose serious security and privacy threats to an end user, because sensitive data about a user in an application can be directly accessed by third-party content, which usually operates with the same privilege as first-party content. The confidentiality and integrity of a user's indirect data, such as a user profile, may also be compromised by such practices. This dissertation aims to identify new threats posed to end users by the practice of embedding third-party content and to develop techniques to mitigate these threats. We first demonstrate how a malicious first-party application can either pollute or infer a user's indirect data in a third-party service or application by embedding it, and propose defense techniques to mitigate these two new classes of threats. We then study, through a large-scale measurement, how over-privileged third-party JavaScript code accesses a user's direct data in web applications in general. This dissertation also aims to design mechanisms that enable end users and developers to limit the privilege of third-party content to prevent unintended behaviors. First, we present TrackMeOrNot, a client-side tracking control mechanism that allows end users to selectively opt out of third-party web tracking on demand. Second, we propose a fine-grained permission mechanism for web applications to restrict the privilege of third-party JavaScript code.
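The fine-grained permission idea can be sketched as a default-deny table keyed by script origin. This is an illustrative toy, not the dissertation's mechanism; the origins and API labels are made up for the example.

```python
# Toy default-deny permission table for third-party code: each origin is
# granted an explicit set of sensitive APIs; everything else is refused.

PERMISSIONS = {
    "https://analytics.example": {"read_url"},
    "https://widgets.example":   {"read_url", "read_cookies"},
}

def allowed(origin, api):
    # Default-deny: an origin absent from the table gets no access at all,
    # so third-party code no longer inherits first-party privilege.
    return api in PERMISSIONS.get(origin, set())

assert allowed("https://widgets.example", "read_cookies")
assert not allowed("https://analytics.example", "read_cookies")
assert not allowed("https://unknown.example", "read_url")
```

The design choice this illustrates is the inversion of the web's default: instead of embedded scripts running with the page's full privilege, each capability must be granted explicitly.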
Advisors/Committee Members: Lee, Wenke (advisor), Ahamad, Mustaque (committee member), Kim, Taesoo (committee member), Vigna, Giovanni (committee member), Feamster, Nick (committee member).
Subjects/Keywords: Permission; Third-party content
APA (6th Edition):
Meng, W. (2017). Identifying and mitigating threats from embedding third-party content. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/58766
Chicago Manual of Style (16th Edition):
Meng, Wei. “Identifying and mitigating threats from embedding third-party content.” 2017. Doctoral Dissertation, Georgia Tech. Accessed March 04, 2021.
http://hdl.handle.net/1853/58766.
MLA Handbook (7th Edition):
Meng, Wei. “Identifying and mitigating threats from embedding third-party content.” 2017. Web. 04 Mar 2021.
Vancouver:
Meng W. Identifying and mitigating threats from embedding third-party content. [Internet] [Doctoral dissertation]. Georgia Tech; 2017. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/1853/58766.
Council of Science Editors:
Meng W. Identifying and mitigating threats from embedding third-party content. [Doctoral Dissertation]. Georgia Tech; 2017. Available from: http://hdl.handle.net/1853/58766
20.
Jung, Seungwoo.
Optimization of SiGe HBT BiCMOS analog building blocks for operation in extreme environments.
Degree: PhD, Electrical and Computer Engineering, 2015, Georgia Tech
URL: http://hdl.handle.net/1853/54419
▼ The objective of this research is to optimize silicon-germanium (SiGe) heterojunction bipolar transistor (HBT) BiCMOS analog circuit building blocks for operation in extreme environments utilizing design techniques. First, negative feedback effects on single-event transient (SET) in SiGe HBT analog circuits were investigated. In order to study the role of internal and external negative feedback effects on SET in circuits, two different types of current mirrors (a basic common-emitter current mirror and a Wilson current mirror) were fabricated using a SiGe HBT BiCMOS technology and exposed to laser-induced single events. The SET measurements were performed at the U.S. Naval Research Laboratory using a two-photon absorption (TPA) pulsed laser. The measured data showed that negative feedback improved SET response in the analog circuits; the highest peak output transient current was reduced by more than 50%, and the settling time of the output current upon a TPA laser strike was shortened with negative feedback. This proven negative feedback radiation hardening technique was applied later in the high-speed 5-bit flash analog-to-digital converter (ADC) for receiver chains of radar systems to improve SET response of the system.
Advisors/Committee Members: Peterson, Andrew F. (advisor), Kim, Taesoo (committee member), Cressler, John D. (committee member), Durgin, Gregory D. (committee member), Vela, Patricio A. (committee member).
Subjects/Keywords: Analog circuit; SEE; SET; Radiation hardening; ADC; Analog-to-digital converter; SiGe; HBT; BiCMOS
APA (6th Edition):
Jung, S. (2015). Optimization of SiGe HBT BiCMOS analog building blocks for operation in extreme environments. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/54419
Chicago Manual of Style (16th Edition):
Jung, Seungwoo. “Optimization of SiGe HBT BiCMOS analog building blocks for operation in extreme environments.” 2015. Doctoral Dissertation, Georgia Tech. Accessed March 04, 2021.
http://hdl.handle.net/1853/54419.
MLA Handbook (7th Edition):
Jung, Seungwoo. “Optimization of SiGe HBT BiCMOS analog building blocks for operation in extreme environments.” 2015. Web. 04 Mar 2021.
Vancouver:
Jung S. Optimization of SiGe HBT BiCMOS analog building blocks for operation in extreme environments. [Internet] [Doctoral dissertation]. Georgia Tech; 2015. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/1853/54419.
Council of Science Editors:
Jung S. Optimization of SiGe HBT BiCMOS analog building blocks for operation in extreme environments. [Doctoral Dissertation]. Georgia Tech; 2015. Available from: http://hdl.handle.net/1853/54419
21.
Lu, Kangjie.
Securing software systems by preventing information leaks.
Degree: PhD, Computer Science, 2017, Georgia Tech
URL: http://hdl.handle.net/1853/58749
▼ Foundational software systems such as operating systems and web servers are implemented in unsafe programming languages for efficiency, and system designers often prioritize performance over security. Hence, these systems inherently suffer from a variety of vulnerabilities and insecure designs that have been exploited by adversaries to launch critical system attacks. Two typical goals of these attacks are to leak sensitive data and to control victim systems. This thesis aims to defeat both data leaks and control attacks. We first identify that, in modern systems, preventing information leaks can be a general defense that not only stops data leaks but also defeats control attacks. We then investigate three ways to prevent information leaks: eliminating information-leak vulnerabilities, re-designing system mechanisms against information leaks, and protecting certain sensitive data from information leaks. We have developed multiple tools for each way. While automatically and reliably securing complex systems, all these tools impose negligible performance overhead. Our extensive evaluation results show that preventing information leaks can be a general and practical approach to securing complex software systems.
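One concrete class of information leak the abstract's keywords name, uninitialized-data use, can be shown with a toy allocator. This sketch is not one of the dissertation's tools; the class and method names are invented. It shows why scrubbing memory before reuse eliminates the leak.

```python
# Toy memory pool: handing out memory without clearing it can expose a
# previous user's data; zeroing before reuse removes that leak channel.

class Pool:
    def __init__(self, size):
        self.mem = bytearray(size)

    def alloc_unsafe(self, n):
        # Returns memory as-is: stale contents from a prior use leak out.
        return self.mem[:n]

    def alloc_zeroed(self, n):
        # Scrub before handing out, so old secrets are unobservable.
        self.mem[:n] = b"\x00" * n
        return self.mem[:n]

pool = Pool(16)
pool.mem[:6] = b"secret"                   # stale data from a prior user
assert pool.alloc_unsafe(6) == b"secret"   # uninitialized-data leak
assert pool.alloc_zeroed(6) == b"\x00" * 6 # leak eliminated
```

This corresponds to the first of the three approaches listed above (eliminating information-leak vulnerabilities); the other two, re-designing system mechanisms and shielding specific sensitive data, operate at a different layer.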
Advisors/Committee Members: Lee, Wenke (advisor), Kim, Taesoo (advisor), Backes, Michael (committee member), Gao, Debin (committee member), Ahamad, Mustaque (committee member), Harris, William R. (committee member).
Subjects/Keywords: System security; Vulnerability; Control-flow attack; Information leak; ASLR; Re-randomization; Replicated execution; Uninitialized-data use
APA (6th Edition):
Lu, K. (2017). Securing software systems by preventing information leaks. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/58749
Chicago Manual of Style (16th Edition):
Lu, Kangjie. “Securing software systems by preventing information leaks.” 2017. Doctoral Dissertation, Georgia Tech. Accessed March 04, 2021.
http://hdl.handle.net/1853/58749.
MLA Handbook (7th Edition):
Lu, Kangjie. “Securing software systems by preventing information leaks.” 2017. Web. 04 Mar 2021.
Vancouver:
Lu K. Securing software systems by preventing information leaks. [Internet] [Doctoral dissertation]. Georgia Tech; 2017. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/1853/58749.
Council of Science Editors:
Lu K. Securing software systems by preventing information leaks. [Doctoral Dissertation]. Georgia Tech; 2017. Available from: http://hdl.handle.net/1853/58749
22.
Merritt, Alexander Marshall.
Efficient programming of massive-memory machines.
Degree: PhD, Computer Science, 2017, Georgia Tech
URL: http://hdl.handle.net/1853/59202
▼ New and emerging memory technologies, combined with enormous growth in data collection and mining within industry, are giving rise to servers with massive pools of main memory: terabytes of memory, disaggregated bandwidth across tens of sockets, and hundreds of cores. But these systems are proving difficult to program efficiently, posing scalability challenges for all layers in the software stack, specifically in managing in-memory data sets. Larger and longer-lived data sets managed by key-value stores require minimizing over-commitment of memory, but current designs trade off performance scalability against memory bloat. Furthermore, opaque operating-system abstractions like virtual memory, and the ill-matched, non-portable interfaces used to manipulate them, make it difficult to express semantic relationships between applications and their data: sharing in-memory data sets requires careful control over internal address mappings, but mmap, ASLR, and friends remove this control.
To explore and address these challenges, this dissertation is composed of two pieces. (1) We introduce and compare a new design for key-value stores, a multi-head log-structured allocator whose design makes explicit use of a machine's configuration to support linear scalability for common read- and write-heavy access patterns. Our implementation of this design, called Nibble, is written in about 4k lines of Rust. (2) Going beyond key-value stores, the second part of this dissertation introduces new general support within the operating system that enables applications to more explicitly manage and share pointer-based in-memory data: we introduce explicit control over address-space allocation and layout by promoting the address space to an explicit abstraction. Processes may associate with multiple address spaces, and threads may switch between them at will to access arbitrarily large data sets without encountering the typical bottlenecks of legacy mmap interfaces. Our implementation of this design is in DragonFly BSD.
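The "multi-head log-structured" idea above can be sketched in miniature. The real Nibble is in Rust and far more involved (epochs, compaction, NUMA placement); the classes and names here are invented for illustration. The point it shows: each socket or core appends to its own log head, so concurrent writers bump disjoint offsets instead of contending on a single allocator lock, while a shared index always points at the newest copy of each key.

```python
# Toy multi-head log-structured key-value store: per-head append-only
# logs for scalable writes, plus an index mapping keys to their latest
# (head, position). Updates append; they never overwrite in place.

class LogHead:
    def __init__(self):
        self.log = []

    def append(self, value):
        self.log.append(value)
        return len(self.log) - 1          # position of the new record

class MultiHeadStore:
    def __init__(self, n_heads):
        self.heads = [LogHead() for _ in range(n_heads)]
        self.index = {}                    # key -> (head_id, position)

    def put(self, key, value, cpu):
        head_id = cpu % len(self.heads)    # writer uses its local head
        pos = self.heads[head_id].append(value)
        self.index[key] = (head_id, pos)   # index moves to newest copy

    def get(self, key):
        head_id, pos = self.index[key]
        return self.heads[head_id].log[pos]

store = MultiHeadStore(n_heads=4)
store.put("k1", "v1", cpu=0)
store.put("k2", "v2", cpu=3)
store.put("k1", "v1-new", cpu=1)           # update lands on a new head
assert store.get("k1") == "v1-new"
assert store.get("k2") == "v2"
```

Note the log-structured trade-off the abstract alludes to: stale copies (here, `"v1"` in head 0) accumulate until some compaction step reclaims them, which is exactly the memory-bloat pressure the design must manage.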
Advisors/Committee Members: Gavrilovska, Ada (advisor), Schwan, Karsten (committee member), Kim, Taesoo (committee member), Ramachandran, Umakishore (committee member), Qureshi, Moinuddin K. (committee member), Milojicic, Dejan S. (committee member).
Subjects/Keywords: Key-value store; Address space; Operating system; Big data; Memory; Scalability; Concurrency
APA (6th Edition):
Merritt, A. M. (2017). Efficient programming of massive-memory machines. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/59202
Chicago Manual of Style (16th Edition):
Merritt, Alexander Marshall. “Efficient programming of massive-memory machines.” 2017. Doctoral Dissertation, Georgia Tech. Accessed March 04, 2021.
http://hdl.handle.net/1853/59202.
MLA Handbook (7th Edition):
Merritt, Alexander Marshall. “Efficient programming of massive-memory machines.” 2017. Web. 04 Mar 2021.
Vancouver:
Merritt AM. Efficient programming of massive-memory machines. [Internet] [Doctoral dissertation]. Georgia Tech; 2017. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/1853/59202.
Council of Science Editors:
Merritt AM. Efficient programming of massive-memory machines. [Doctoral Dissertation]. Georgia Tech; 2017. Available from: http://hdl.handle.net/1853/59202