You searched for +publisher:"University of Texas – Austin" +contributor:("Alvisi, Lorenzo").
Showing records 1 – 30 of 30 total matches.

University of Texas – Austin
1.
Setty, Srinath T.V.
Toward practical argument systems for verifiable computation.
Degree: PhD, Computer Sciences, 2014, University of Texas – Austin
URL: http://hdl.handle.net/2152/28365
How can a client extract useful work from a server without trusting it to compute correctly? A modern motivation for this classic question is third party computing models in which customers outsource their computations to service providers (as in cloud computing). In principle, deep results in complexity theory and cryptography imply that it is possible to verify that an untrusted entity executed a computation correctly. For instance, the server can employ probabilistically checkable proofs (PCPs) in conjunction with cryptographic commitments to generate a succinct proof of correct execution, which the client can efficiently check. However, these theoretical solutions are impractical: they require thousands of CPU years to verifiably execute even simple computations. This dissertation describes the design, implementation, and experimental evaluation of a system, called Pepper, that brings this theory into the realm of plausibility. Pepper incorporates a series of algorithmic improvements and systems engineering techniques to improve performance by over 20 orders of magnitude, relative to an implementation of the theory without our refinements. These include a new probabilistically checkable proof encoding with nearly optimal asymptotics, a concise representation for computations, a more efficient cryptographic commitment primitive, and a distributed implementation of the server with GPU acceleration to reduce latency. Additionally, Pepper extends the verification machinery to handle realistic applications of third party computing: those that interact with remote storage or state (e.g., MapReduce jobs, database queries). To do so, Pepper composes techniques from untrusted storage with the aforementioned technical machinery to verifiably offload both computations and state. Furthermore, to make it easy to use this technology, Pepper includes a compiler to automatically transform programs in a subset of C into executables that run verifiably. One of the chief limitations of Pepper is that verifiable execution is still orders of magnitude slower than an unverifiable native execution. Nonetheless, Pepper takes powerful results from complexity theory and verifiable computation a few steps closer to practicality.
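The theme above, verifying a result far more cheaply than recomputing it, can be made concrete with a toy example. The sketch below is not Pepper's PCP-and-commitment machinery; it is Freivalds' classic randomized check for an outsourced matrix product, offered only as an analogy for probabilistic verification (the client and server roles are hypothetical).

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify that C == A x B over the integers.

    Each trial costs three matrix-vector products (O(n^2)) instead of the
    O(n^3) needed to recompute A x B; an incorrect C is caught with
    probability at least 1 - 2**(-trials). Not Pepper's protocol, just the
    same "verification is cheaper than execution" principle.
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):   # compare A(Br) with Cr
            return False
    return True

# usage: the "server" returns a product, the "client" spot-checks it
A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert freivalds_check(A, B, [[19, 22], [43, 50]])        # correct product
assert not freivalds_check(A, B, [[19, 22], [43, 51]])    # wrong result detected
```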
Advisors/Committee Members: Alvisi, Lorenzo (advisor), Walfish, Michael Howard (advisor).
Subjects/Keywords: Verifiable computation; Argument systems; Outsourced computation; Proof-based verifiable computation
APA (6th Edition):
Setty, S. T. V. (2014). Toward practical argument systems for verifiable computation. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/28365
Chicago Manual of Style (16th Edition):
Setty, Srinath T V. “Toward practical argument systems for verifiable computation.” 2014. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://hdl.handle.net/2152/28365.
MLA Handbook (7th Edition):
Setty, Srinath T V. “Toward practical argument systems for verifiable computation.” 2014. Web. 06 Mar 2021.
Vancouver:
Setty STV. Toward practical argument systems for verifiable computation. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2014. [cited 2021 Mar 06]. Available from: http://hdl.handle.net/2152/28365.
Council of Science Editors:
Setty STV. Toward practical argument systems for verifiable computation. [Doctoral Dissertation]. University of Texas – Austin; 2014. Available from: http://hdl.handle.net/2152/28365

University of Texas – Austin
2.
Xie, Chao, Ph. D.
High-performance transactional storage.
Degree: PhD, Computer science, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/42014
Developers face a fundamental tension between performance and ease of programming when building complex applications. On the one hand, by freeing developers from having to worry about concurrency and failures, ACID transactions can greatly simplify the task of building applications. These benefits, however, have a cost: in a distributed setting, ACID transactions can limit performance and make systems hard to scale in the presence of contention. On the other hand, the BASE approach can achieve higher performance by providing weakened semantics. Embracing the BASE paradigm, however, exacts its own heavy price: once one renounces consistency guarantees, it is up to developers to explicitly code in their applications the logic necessary to ensure consistency in the presence of concurrency and faults, making this task complex and error-prone.
This dissertation aims to resolve this tension by building database systems that provide both the ease of programming of the ACID approach and the performance that the BASE approach can provide. Our approach depends on the observation that different transactions affect overall performance of applications in different ways.
Traditional ACID databases ignore this diversity: they provide a single implementation of the same uniform abstraction across all transactions. This dissertation explores two different ways to leverage the previous key observation to combine performance and ease of programming, and presents two systems, Salt and Callas, inspired by them.
Advisors/Committee Members: Alvisi, Lorenzo (advisor), Witchel, Emmett (committee member), Pingali, Keshav (committee member), Aguilera, Marcos (committee member).
Subjects/Keywords: Transaction; High-performance; Distributed database
APA (6th Edition):
Xie, Chao, P. D. (2016). High-performance transactional storage. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/42014
Chicago Manual of Style (16th Edition):
Xie, Chao, Ph D. “High-performance transactional storage.” 2016. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://hdl.handle.net/2152/42014.
MLA Handbook (7th Edition):
Xie, Chao, Ph D. “High-performance transactional storage.” 2016. Web. 06 Mar 2021.
Vancouver:
Xie, Chao PD. High-performance transactional storage. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2016. [cited 2021 Mar 06]. Available from: http://hdl.handle.net/2152/42014.
Council of Science Editors:
Xie, Chao PD. High-performance transactional storage. [Doctoral Dissertation]. University of Texas – Austin; 2016. Available from: http://hdl.handle.net/2152/42014

University of Texas – Austin
3.
Crooks, Natacha.
A client-centric approach to transactional datastores.
Degree: PhD, Computer Science, 2020, University of Texas – Austin
URL: http://dx.doi.org/10.26153/tsw/8360
Modern applications must collect and store massive amounts of data. Cloud storage offers these applications simplicity: the abstraction of a failure-free, perfectly scalable black-box. While appealing, offloading data to the cloud is not without its challenges. These cloud storage systems often favour weaker levels of isolation and consistency. These weaker guarantees introduce behaviours that, without care, can break application logic. Offloading data to an untrusted third party like the cloud also raises questions of security and privacy. This thesis seeks to improve the performance, the semantics and the security of transactional cloud storage systems. It centers around a simple idea: defining consistency guarantees from the perspective of the applications that observe these guarantees, rather than from the perspective of the systems that implement them. This new perspective brings forth several benefits. First, it offers simpler and cleaner definitions of weak isolation and consistency guarantees. Second, it enables scalable implementations of existing guarantees like causal consistency. Finally, it has applications to security: it allows us to efficiently augment transactional cloud storage systems with obliviousness guarantees.
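A minimal sketch of the client-centric viewpoint (hypothetical code, not the dissertation's formal model): a session guarantee such as monotonic reads can be stated, and checked, purely in terms of the versions a single client has observed, with no reference to the storage system's internals.

```python
def satisfies_monotonic_reads(observations):
    """observations: one client's ordered log of (key, version) pairs returned
    by its reads. The guarantee is defined entirely by what this client saw:
    once it has observed version v of a key, it must never later be served an
    older version of that key."""
    latest_seen = {}
    for key, version in observations:
        if version < latest_seen.get(key, version):
            return False
        latest_seen[key] = max(version, latest_seen.get(key, version))
    return True

# usage: the second log "travels back in time" on key x and is rejected
assert satisfies_monotonic_reads([("x", 1), ("y", 3), ("x", 2)])
assert not satisfies_monotonic_reads([("x", 2), ("y", 3), ("x", 1)])
```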
Advisors/Committee Members: Alvisi, Lorenzo (advisor), Peter, Simon, Ph. D. (advisor), Witchel, Emmett (committee member), Bailis, Peter (committee member).
Subjects/Keywords: Distributed systems; Transactional datastores; Cloud storage; Cloud storage systems; Cloud storage privacy; Cloud storage security; Cloud storage performance; Cloud storage semantics
APA (6th Edition):
Crooks, N. (2020). A client-centric approach to transactional datastores. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/8360
Chicago Manual of Style (16th Edition):
Crooks, Natacha. “A client-centric approach to transactional datastores.” 2020. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://dx.doi.org/10.26153/tsw/8360.
MLA Handbook (7th Edition):
Crooks, Natacha. “A client-centric approach to transactional datastores.” 2020. Web. 06 Mar 2021.
Vancouver:
Crooks N. A client-centric approach to transactional datastores. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2020. [cited 2021 Mar 06]. Available from: http://dx.doi.org/10.26153/tsw/8360.
Council of Science Editors:
Crooks N. A client-centric approach to transactional datastores. [Doctoral Dissertation]. University of Texas – Austin; 2020. Available from: http://dx.doi.org/10.26153/tsw/8360

University of Texas – Austin
4.
-6770-7949.
Bringing modular concurrency control to the next level.
Degree: PhD, Computer Science, 2019, University of Texas – Austin
URL: http://dx.doi.org/10.26153/tsw/1500
Database users face a tension between ease-of-programming and high performance: ACID transactions can greatly simplify the programming effort of database applications by providing four useful properties (atomicity, consistency, isolation, and durability), but enforcing these properties can degrade performance.
This dissertation eases this tension by improving the performance of ACID transactions for scenarios where data contention is the bottleneck. The approach that we take is federating concurrency control (CC) mechanisms. It is based on the observation that any single CC mechanism is bound to make trade-offs that cause it to perform well in some cases but poorly in others. A federation opens the opportunity of applying each mechanism only to the set of transactions or workloads where it shines, while maintaining isolation.
In particular, this work builds upon Modular Concurrency Control (MCC), a recent technique that federates CCs by partitioning transactions into groups, and by applying different CC mechanisms in each group.
This dissertation addresses two critical shortcomings in the current embodiment of MCC. First, cross-group data conflicts are handled with a single, unoptimized CC mechanism that can significantly limit performance. Second, configuring MCC is a complex task, which runs counter to MCC’s purpose: to improve performance without sacrificing ease-of-programming.
To address these problems, this dissertation presents Tebaldi, a new transactional database that brings Modular Concurrency Control to the next level, both figuratively and literally. Tebaldi introduces a new, hierarchical model to MCC that partitions transactions recursively to compose CC mechanisms in a multi-level tree. This model increases flexibility in federating CC mechanisms, which is the key to realizing the performance potential of federation. Tebaldi reduces configuration complexity by managing the MCC federation automatically: it can detect performance issues in the current workload in real time and automatically adjust its configuration to improve performance.
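The hierarchical idea can be pictured as a tree that routes each transaction down to a leaf, where the leaf names the concurrency-control mechanism applied within that group. The sketch below is purely illustrative; the group names, the classifier, and the two-level layout are assumptions, not Tebaldi's actual configuration.

```python
class CCNode:
    """Internal node: routes a transaction to one of its children.
    Leaf node (no children): names the CC mechanism used for that group."""
    def __init__(self, name, classify=None, children=None):
        self.name = name
        self.classify = classify          # function: txn -> child index
        self.children = children or []

    def route(self, txn):
        if not self.children:
            return self.name              # leaf: the mechanism that runs txn
        return self.children[self.classify(txn)].route(txn)

# a two-level federation: read-only transactions get one mechanism, the rest
# are split further by whether they touch a hot key (all names hypothetical)
tree = CCNode("root",
              classify=lambda t: 0 if t["read_only"] else 1,
              children=[
                  CCNode("multiversion-reads"),
                  CCNode("updates",
                         classify=lambda t: 0 if "hot_key" in t["keys"] else 1,
                         children=[CCNode("pessimistic-2PL"),
                                   CCNode("optimistic-CC")]),
              ])

print(tree.route({"read_only": True, "keys": ["a"]}))         # multiversion-reads
print(tree.route({"read_only": False, "keys": ["hot_key"]}))  # pessimistic-2PL
```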
Advisors/Committee Members: Alvisi, Lorenzo (advisor), Witchel, Emmett (advisor), Rossbach, Christopher J. (committee member), Gehrke, Johannes (committee member).
Subjects/Keywords: Distributed database; Transaction; Concurrency control; Modular concurrency control
APA (6th Edition):
-6770-7949. (2019). Bringing modular concurrency control to the next level. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/1500
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-6770-7949. “Bringing modular concurrency control to the next level.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://dx.doi.org/10.26153/tsw/1500.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-6770-7949. “Bringing modular concurrency control to the next level.” 2019. Web. 06 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-6770-7949. Bringing modular concurrency control to the next level. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2021 Mar 06]. Available from: http://dx.doi.org/10.26153/tsw/1500.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-6770-7949. Bringing modular concurrency control to the next level. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/1500
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

University of Texas – Austin
5.
-6276-2164.
Toward practical and private online services.
Degree: PhD, Computer Science, 2018, University of Texas – Austin
URL: http://hdl.handle.net/2152/63351
Today's common online services (social networks, media streaming, messaging, email, etc.) bring convenience. However, these services are susceptible to privacy leaks. Certainly, email snooping by rogue employees, email server hacks, and accidental disclosures of user ratings for movies are some sources of private information leakage. This dissertation investigates the following question: Can we build systems that (a) provide strong privacy guarantees to the users, (b) are consistent with existing commercial and policy regimes, and (c) are affordable?
Satisfying all three requirements simultaneously is challenging, as providing strong privacy guarantees usually necessitates sacrificing functionality, incurring high resource costs, or both. Indeed, there are powerful cryptographic protocols – private information retrieval (PIR) and secure two-party computation (2PC) – that provide strong guarantees but are orders of magnitude more expensive than their non-private counterparts. This dissertation takes these protocols as a starting point and then substantially reduces their costs by tailoring them using application-specific properties. It presents two systems, Popcorn and Pretzel, built on this design ethos.
Popcorn is a Netflix-like media delivery system that provably hides, even from the content distributor (for example, Netflix), which movie a user is watching. Popcorn tailors PIR protocols to the media domain. It amortizes the server-side overhead of PIR by batching requests from the large number of concurrent users retrieving content at any given time; and it forms large batches without introducing playback delays by leveraging the properties of media streaming. Popcorn is consistent with the prevailing commercial regime (copyrights, etc.), and its per-request dollar cost is 3.87 times that of a non-private system.
The other system described in this dissertation, Pretzel, is an email system that encrypts emails end-to-end between senders and intended recipients, but allows the email service provider to perform content-based spam filtering and targeted advertising. Pretzel refines a 2PC protocol. It reduces the resource consumption of the protocol by replacing the underlying encryption scheme with a more efficient one, applying a packing technique to conserve invocations of the encryption algorithm, and pruning the inputs to the protocol. Pretzel's costs are estimated to be up to 5.4 times those of a legacy non-private implementation for the email provider, with additional but modest client-side requirements.
Popcorn and Pretzel have fundamental connections. For instance, the cryptographic protocols in both systems securely compute vector-matrix products. However, we observe that differences in the vector and matrix dimensions lead to different system designs.
Ultimately, both systems represent a potentially appealing compromise: sacrifice some functionality to build in strong privacy properties at affordable costs.
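As a reference point for the PIR primitive that Popcorn starts from, the sketch below shows the textbook two-server XOR construction: as long as the two servers do not collude, neither learns which record the client retrieved. This is only the underlying idea in its simplest form, not Popcorn's batched media-delivery protocol.

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class PIRServer:
    """Holds a copy of the database; XORs together the records a query selects."""
    def __init__(self, db):
        self.db = db                      # equal-length records

    def answer(self, selection):
        acc = bytes(len(self.db[0]))
        for record, selected in zip(self.db, selection):
            if selected:
                acc = xor_bytes(acc, record)
        return acc

def pir_fetch(server_a, server_b, n, i):
    """Client: recover record i without revealing i to either server."""
    sel_a = [secrets.randbits(1) for _ in range(n)]   # uniformly random subset
    sel_b = list(sel_a)
    sel_b[i] ^= 1                                     # differs only at position i
    return xor_bytes(server_a.answer(sel_a), server_b.answer(sel_b))

# usage: each server sees a query that is statistically independent of i
db = [b"movie-0!", b"movie-1!", b"movie-2!", b"movie-3!"]
assert pir_fetch(PIRServer(db), PIRServer(db), len(db), 2) == b"movie-2!"
```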
Advisors/Committee Members: Alvisi, Lorenzo (advisor), Peter, Simon (committee member), Walfish, Michael (committee member), Witchel, Emmett (committee member).
Subjects/Keywords: Private media delivery; Encrypted email; Private information retrieval; Secure two-party computation; Linear classifiers
APA (6th Edition):
-6276-2164. (2018). Toward practical and private online services. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/63351
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-6276-2164. “Toward practical and private online services.” 2018. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://hdl.handle.net/2152/63351.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-6276-2164. “Toward practical and private online services.” 2018. Web. 06 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-6276-2164. Toward practical and private online services. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2018. [cited 2021 Mar 06]. Available from: http://hdl.handle.net/2152/63351.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-6276-2164. Toward practical and private online services. [Doctoral Dissertation]. University of Texas – Austin; 2018. Available from: http://hdl.handle.net/2152/63351
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

University of Texas – Austin
6.
-6848-2988.
Exploiting leakage in privacy-protecting systems.
Degree: PhD, Computer science, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/45559
Conventional systems store data unencrypted. This allows them to easily access and manipulate their data. However, by not protecting their data, these systems are at a greater risk if they are compromised by a malicious hacker. More advanced systems add encryption to their data, but this causes other issues. Normal encryption often ruins the ability to run computations on data, negating many of the reasons to store the data in the first place. More recently, some systems have attempted to strike a compromise between security and functionality by using encryption that partially protects their data while still allowing certain operations to be performed. Examples of these systems include general purpose frameworks like Mylar for Web applications, as well as domain- and application-specific systems like P3 for photo storage. This dissertation examines the privacy concerns that arise when using these systems with realistic datasets and real-world usage scenarios. The first system we explore is Mylar, an extension to the popular Meteor framework. Meteor is a JavaScript-based framework for concurrently developing the client and server parts of Web apps. Mylar allows users to share and search over data while protecting against a compromised or malicious server. We expand Mylar's vague definitions of passive and active adversaries into three threat models and show that Mylar is insecure against all three models. Mylar's metadata leaks sensitive information to an adversary with one-time access to Mylar's encrypted database. Mylar provides no protection against adversaries which can monitor user access patterns, allowing them to watch for data dependent behavior corresponding to sensitive information. Finally, Mylar fails to protect against active attackers who, by nature of the system, have been given the ability to modify the database and run search over the encrypted data. We next look at a set of systems designed to protect sensitive images by selectively obfuscating them. We examine a system called P3 which splits an image into two images: a secret image that contains most of the identifying information and a public image that can be distributed with less risk of leaking information. We also investigate mosaicing (often called pixelation) and blurring, two commonly used image obfuscation techniques. Examining the obfuscated images, it's obvious that all three of these systems leak information. However, it's not clear how to exploit this leakage or if doing so is even possible. The authors of P3 specifically examined P3 using a number of techniques that mimic human image recognition. We bypass the need for human recognition by making use of modern machine learning techniques. Using neural networks, we are able to classify the obfuscated image content automatically without needing human assistance or having to define image features. Finally, we conclude by proposing a number of guidelines for creating modern privacy-preserving systems. We look at problems that arise when creating a scheme on paper as well as issues that…
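To make concrete what mosaicing (pixelation) does to an image, and why residual signal remains for a classifier to exploit, here is one common block-averaging implementation. It is an assumed illustration of the obfuscation being attacked, not code from the dissertation.

```python
import numpy as np

def mosaic(image, block=8):
    """Pixelate by replacing each block x block tile with its average value.

    The output has far fewer effective pixels, but every tile still carries
    the average intensity of the region it covers, which is precisely the
    residual signal a trained classifier can exploit.
    """
    out = image.astype(np.float32)
    h, w = out.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            tile[...] = tile.mean(axis=(0, 1))
    return out.astype(image.dtype)

# usage: obfuscate a synthetic 32x32 grayscale image into a 4x4 grid of averages
img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
pixelated = mosaic(img, block=8)
```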
Advisors/Committee Members: Gouda, Mohamed G., 1947- (advisor), Shmatikov, Vitaly (advisor), Alvisi, Lorenzo (committee member), Witchel, Emmett (committee member).
Subjects/Keywords: Security; Privacy
APA (6th Edition):
-6848-2988. (2016). Exploiting leakage in privacy-protecting systems. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/45559
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-6848-2988. “Exploiting leakage in privacy-protecting systems.” 2016. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://hdl.handle.net/2152/45559.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-6848-2988. “Exploiting leakage in privacy-protecting systems.” 2016. Web. 06 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-6848-2988. Exploiting leakage in privacy-protecting systems. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2016. [cited 2021 Mar 06]. Available from: http://hdl.handle.net/2152/45559.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-6848-2988. Exploiting leakage in privacy-protecting systems. [Doctoral Dissertation]. University of Texas – Austin; 2016. Available from: http://hdl.handle.net/2152/45559
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
7.
Wong, Edmund Liangfei.
Raising the BAR in dependable cooperative services.
Degree: PhD, Computer Science, 2013, University of Texas – Austin
URL: http://hdl.handle.net/2152/21358
Cooperative services – a term which includes any system that relies on the resources and participation of its clients to function – have proven to be a popular, naturally scalable means to disseminate content, distribute computational workloads, or provide network connectivity. However, because these services critically depend on participants that are not controlled by a single administrative domain, these services must be designed to function in environments where no participant – because of failure or selfishness – will necessarily follow the specified protocol. This thesis addresses the challenge of establishing and maintaining cooperation in cooperative services by (1) advancing our understanding of the limits to what our services can guarantee in the presence of failure, (2) demonstrating the critical role that correct participants can play in the incentives provided by the service, and (3) proposing a new notion of equilibrium that, unlike traditional notions, provides rigorous yet practical guarantees in the presence of collusion. Furthermore, we demonstrate that our ideas can be applied to practice by designing and implementing Seer, a system that provides a scalable, reliable, and robust method for disseminating content even if participants may fail arbitrarily or deviate selfishly as a coalition.
Advisors/Committee Members: Alvisi, Lorenzo (advisor).
Subjects/Keywords: Cooperative services; P2P; Game theory
APA (6th Edition):
Wong, E. L. (2013). Raising the BAR in dependable cooperative services. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/21358
Chicago Manual of Style (16th Edition):
Wong, Edmund Liangfei. “Raising the BAR in dependable cooperative services.” 2013. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://hdl.handle.net/2152/21358.
MLA Handbook (7th Edition):
Wong, Edmund Liangfei. “Raising the BAR in dependable cooperative services.” 2013. Web. 06 Mar 2021.
Vancouver:
Wong EL. Raising the BAR in dependable cooperative services. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2013. [cited 2021 Mar 06]. Available from: http://hdl.handle.net/2152/21358.
Council of Science Editors:
Wong EL. Raising the BAR in dependable cooperative services. [Doctoral Dissertation]. University of Texas – Austin; 2013. Available from: http://hdl.handle.net/2152/21358
8.
Kapritsos, Emmanouil.
Replicating multithreaded services.
Degree: PhD, Computer Sciences, 2014, University of Texas – Austin
URL: http://hdl.handle.net/2152/28366
For the last 40 years, the systems community has invested a lot of effort in designing techniques for building fault tolerant distributed systems and services. This effort has produced a massive list of results: the literature describes how to design replication protocols that tolerate a wide range of failures (from simple crashes to malicious "Byzantine" failures) in a wide range of settings (e.g. synchronous or asynchronous communication, with or without stable storage), optimizing various metrics (e.g. number of messages, latency, throughput). These techniques have their roots in ideas, such as the abstraction of State Machine Replication and the Paxos protocol, that were conceived when computing was very different than it is today: computers had a single core; all processing was done using a single thread of control, handling requests sequentially; and a collection of 20 nodes was considered a large distributed system. In the last decade, however, computing has gone through some major paradigm shifts, with the advent of multicore architectures and large cloud infrastructures. This dissertation explains how these profound changes impact the practical usefulness of traditional fault tolerant techniques and proposes new ways to architect these solutions to fit the new paradigms.
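For context, here is the State Machine Replication abstraction the paragraph refers to: replicas that apply an agreed-upon log of commands sequentially and deterministically end up in identical states. This is a textbook illustration with a hypothetical command format, not the dissertation's multicore-aware technique.

```python
class Replica:
    """A deterministic state machine: applying the same command log in the
    same order on every replica yields identical state."""
    def __init__(self):
        self.state = {}

    def apply(self, command):
        op, key, value = command
        if op == "put":
            self.state[key] = value
        elif op == "delete":
            self.state.pop(key, None)

# The classic SMR assumption: one agreed-upon total order of commands (what
# Paxos provides), executed sequentially. Multicore servers strain exactly
# that sequential-execution assumption.
log = [("put", "x", 1), ("put", "y", 2), ("delete", "x", None)]
replicas = [Replica() for _ in range(3)]
for cmd in log:
    for r in replicas:
        r.apply(cmd)
assert all(r.state == {"y": 2} for r in replicas)
```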
Advisors/Committee Members: Alvisi, Lorenzo (advisor).
Subjects/Keywords: Fault tolerance; Replication; Multithreading
APA (6th Edition):
Kapritsos, E. (2014). Replicating multithreaded services. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/28366
Chicago Manual of Style (16th Edition):
Kapritsos, Emmanouil. “Replicating multithreaded services.” 2014. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://hdl.handle.net/2152/28366.
MLA Handbook (7th Edition):
Kapritsos, Emmanouil. “Replicating multithreaded services.” 2014. Web. 06 Mar 2021.
Vancouver:
Kapritsos E. Replicating multithreaded services. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2014. [cited 2021 Mar 06]. Available from: http://hdl.handle.net/2152/28366.
Council of Science Editors:
Kapritsos E. Replicating multithreaded services. [Doctoral Dissertation]. University of Texas – Austin; 2014. Available from: http://hdl.handle.net/2152/28366
9.
-6879-6322.
Algorithms for dynamic and distributed networks : shortest paths, betweenness centrality and related problems.
Degree: PhD, Computer Science, 2018, University of Texas – Austin
URL: http://hdl.handle.net/2152/63353
In this thesis we study the problem of computing Betweenness Centrality in dynamic and distributed networks. Betweenness Centrality (BC) is a well-known measure for the relative importance of a node in a social network. It is widely used in applications such as understanding lethality in biological networks, identifying key actors in terrorist networks, supply chain management processes and more. The necessity of computing BC in large networks, especially when they quickly change their topology over time, motivates the study of dynamic algorithms that can perform faster than static ones. Moreover, the current techniques for computing BC require a deeper understanding of a classic problem in computer science: computing all-pairs all shortest paths (APASP) in a graph. One of the main contributions of this thesis is a collection of dynamic algorithms for computing APASP and BC scores which are provably faster than static algorithms for several classes of graphs. We use n = |V| and m = |E| to indicate respectively the number of nodes and edges in a directed positively weighted graph G = (V,E). Our bounds depend on the parameter ν* that is defined as the maximum number of edges that lie on shortest paths through any single vertex. The main results in the first part of this thesis are listed below.
- A decrease-only algorithm for computing BC and APASP running in time O(ν* n) that is provably faster than recomputing from scratch in sparse graphs.
- An increase-only algorithm for computing BC and APASP that runs in O(ν*² log n) per update for a sequence of at least Ω(m*/ν*) updates. Here m* is the number of edges in G that lie on shortest paths. This algorithm uses O(m* ν*) space.
- An increase-only algorithm for computing BC and APASP that runs in O(ν*² log n) but improves the computational space to O(m* n).
- A fully dynamic algorithm for computing BC and APASP that runs in O(ν*² log³ n) amortized time per update for a sequence of at least Ω(n) updates.
- A refinement of our fully dynamic algorithm that improves the amortized running time to O(ν*² log² n), saving a logarithmic factor.
In the second part of this thesis, we study the case when the input graph is a distributed network of machines and the BC score of each machine, considering its location within the network topology, needs to be computed. In this scenario, each node in the input graph is a self-contained machine with limited knowledge of the network and limited communication power. Each machine only knows the (virtual) location of the neighboring machines (adjacent nodes in the input graph). The messages exchanged in each round between machines cannot exceed a bounded size of at most O(log n) bits. In this distributed model, called CONGEST, we present algorithms for computing BC in near-optimal time for unweighted networks, and some classes of weighted networks. Specifically, our main results are:
- A distributed BC algorithm for unweighted undirected graphs completing in at most min(2n + O(D_u), 4n) rounds,…
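For orientation, the static baseline that these dynamic algorithms aim to beat is a Brandes-style betweenness centrality computation, sketched below for unweighted graphs. This is standard textbook code, not one of the dissertation's dynamic or CONGEST algorithms.

```python
from collections import deque

def betweenness_centrality(graph):
    """Brandes' algorithm for unweighted graphs.

    graph: dict mapping each node to an iterable of neighbours.
    Returns unnormalized BC scores in O(nm) time: the from-scratch cost that
    dynamic algorithms try to avoid paying after every edge update.
    """
    bc = {v: 0.0 for v in graph}
    for s in graph:
        # single-source BFS, counting shortest-path multiplicities
        sigma = {v: 0 for v in graph}; sigma[s] = 1
        dist = {v: -1 for v in graph}; dist[s] = 0
        preds = {v: [] for v in graph}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # accumulate dependencies in reverse BFS order
        delta = {v: 0.0 for v in graph}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# usage: on the path a-b-c, only b lies on a shortest path between other nodes
print(betweenness_centrality({"a": ["b"], "b": ["a", "c"], "c": ["b"]}))
```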
Advisors/Committee Members: Ramachandran, Vijaya (advisor), Alvisi, Lorenzo (committee member), Nasre, Meghana (committee member), Plaxton, C. Greg (committee member), Zuckerman, David (committee member).
Subjects/Keywords: Betweenness centrality; Dynamic algorithms; Distributed algorithms
APA (6th Edition):
-6879-6322. (2018). Algorithms for dynamic and distributed networks : shortest paths, betweenness centrality and related problems. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/63353
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-6879-6322. “Algorithms for dynamic and distributed networks : shortest paths, betweenness centrality and related problems.” 2018. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://hdl.handle.net/2152/63353.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-6879-6322. “Algorithms for dynamic and distributed networks : shortest paths, betweenness centrality and related problems.” 2018. Web. 06 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-6879-6322. Algorithms for dynamic and distributed networks : shortest paths, betweenness centrality and related problems. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2018. [cited 2021 Mar 06]. Available from: http://hdl.handle.net/2152/63353.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-6879-6322. Algorithms for dynamic and distributed networks : shortest paths, betweenness centrality and related problems. [Doctoral Dissertation]. University of Texas – Austin; 2018. Available from: http://hdl.handle.net/2152/63353
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

University of Texas – Austin
10.
-8499-7554.
Galois : a system for parallel execution of irregular algorithms.
Degree: PhD, Computer Sciences, 2015, University of Texas – Austin
URL: http://hdl.handle.net/2152/30536
A programming model which allows users to program with high productivity and which produces high performance executions has been a goal for decades. This dissertation makes progress towards this elusive goal by describing the design and implementation of the Galois system, a parallel programming model for shared-memory, multicore machines. Central to the design is the idea that scheduling of a program can be decoupled from the core computational operator and data structures. However, efficient programs often require application-specific scheduling to achieve best performance. To bridge this gap, an extensible and abstract scheduling policy language is proposed, which allows programmers to focus on selecting high-level scheduling policies while delegating the tedious task of implementing the policy to a scheduler synthesizer and runtime system. Implementations of deterministic and prioritized scheduling also are described. An evaluation of a well-studied benchmark suite reveals that factoring programs into operators, schedulers and data structures can produce significant performance improvements over unfactored approaches. Comparison of the Galois system with existing programming models for graph analytics shows significant performance improvements, often orders of magnitude more, due to (1) better support for the restrictive programming models of existing systems and (2) better support for more sophisticated algorithms and scheduling, which cannot be expressed in other systems.
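The decoupling described above can be illustrated with a toy worklist executor in which the operator is written once and the scheduling policy is chosen separately. This is not the Galois API; the function names and policies below are assumptions made for illustration only.

```python
import heapq
from collections import deque

def run(operator, initial, schedule="fifo", priority=None):
    """Toy worklist executor: the operator is fixed, the scheduling policy
    (FIFO, LIFO, or priority-driven) is selected independently."""
    if schedule == "fifo":
        work = deque(initial); pop, push = work.popleft, work.append
    elif schedule == "lifo":
        work = list(initial); pop, push = work.pop, work.append
    else:                                   # priority queue, user-supplied key
        work = [(priority(x), x) for x in initial]
        heapq.heapify(work)
        pop = lambda: heapq.heappop(work)[1]
        push = lambda x: heapq.heappush(work, (priority(x), x))
    while work:
        for new_item in operator(pop()):    # the operator may create new work
            push(new_item)

# usage: a BFS-style relaxation where the operator visits one node at a time
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
dist = {0: 0}
def visit(u):
    created = []
    for v in graph[u]:
        if v not in dist or dist[v] > dist[u] + 1:
            dist[v] = dist[u] + 1
            created.append(v)
    return created

run(visit, [0], schedule="fifo")
print(dist)                                 # {0: 0, 1: 1, 2: 1, 3: 2}
```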
Advisors/Committee Members: Pingali, Keshav (advisor), Alvisi, Lorenzo (committee member), Lethin, Richard (committee member), Padua, David (committee member), Witchel, Emmett (committee member).
Subjects/Keywords: Parallel programming; Multicore
APA (6th Edition):
-8499-7554. (2015). Galois : a system for parallel execution of irregular algorithms. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/30536
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-8499-7554. “Galois : a system for parallel execution of irregular algorithms.” 2015. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://hdl.handle.net/2152/30536.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-8499-7554. “Galois : a system for parallel execution of irregular algorithms.” 2015. Web. 06 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-8499-7554. Galois : a system for parallel execution of irregular algorithms. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2015. [cited 2021 Mar 06]. Available from: http://hdl.handle.net/2152/30536.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-8499-7554. Galois : a system for parallel execution of irregular algorithms. [Doctoral Dissertation]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/30536
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
11.
-5937-3237.
A new approach to detecting failures in distributed systems.
Degree: PhD, Computer science, 2015, University of Texas – Austin
URL: http://hdl.handle.net/2152/31376
Fault-tolerant distributed systems often handle failures in two steps: first, detect the failure and, second, take some recovery action. A common approach to detecting failures is end-to-end timeouts, but using timeouts brings problems. First, timeouts are inaccurate: just because a process is unresponsive does not mean that process has failed. Second, choosing a timeout is hard: short timeouts can exacerbate the problem of inaccuracy, and long timeouts can make the system wait unnecessarily. In fact, a good timeout value—one that balances the choice between accuracy and speed—may not even exist, owing to the variance in a system’s end-to-end delays. This dissertation posits a new approach to detecting failures in distributed systems: use information about failures that is local to each component, e.g., the contents of an OS’s process table. We call such information inside information, and use it as the basis in the design and implementation of three failure reporting services for data center applications, which we call Falcon, Albatross, and Pigeon. Falcon deploys a network of software modules to gather inside information in the system, and it guarantees that it never reports a working process as crashed by sometimes terminating unresponsive components. This choice helps applications by making reports of failure reliable, meaning that applications can treat them as ground truth. Unfortunately, Falcon cannot handle network failures because guaranteeing that a process has crashed requires network communication; we address this problem in Albatross and Pigeon. Instead of killing, Albatross blocks suspected processes from using the network, allowing applications to make progress during network partitions. Pigeon renounces interference altogether, and reports inside information to applications directly and with more detail to help applications make better recovery decisions. By using these services, applications can improve their recovery from failures both quantitatively and qualitatively. Quantitatively, these services reduce detection time by one to two orders of magnitude over the end-to-end timeouts commonly used by data center applications, thereby reducing the unavailability caused by failures. Qualitatively, these services provide more specific information about failures, which can reduce the logic required for recovery and can help applications better decide when recovery is not necessary.
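A minimal illustration of the "inside information" idea (not Falcon's layered architecture): a monitor co-located with the target asks the local OS whether the process still exists, instead of inferring failure from an end-to-end timeout. Exporting this verdict to remote clients is left out of the sketch.

```python
import os

def process_alive(pid):
    """Inside information on the local host: consult the OS rather than wait
    for a timeout. Sending signal 0 performs only the existence/permission
    check (POSIX semantics); no signal is actually delivered."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False          # the kernel says the process is gone
    except PermissionError:
        return True           # it exists but is owned by another user
    return True

# usage: a co-located monitor turns "unresponsive for T seconds" into
# "the local OS reports the process as dead"
print(process_alive(os.getpid()))   # True
```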
Advisors/Committee Members: Alvisi, Lorenzo (advisor), Aguilera, Marcos K (committee member), Shmatikov, Vitaly (committee member), Walfish, Michael (committee member), Witchel, Emmett (committee member).
Subjects/Keywords: Computer science; Fault tolerance; Distributed systems; Failure detection
APA (6th Edition):
-5937-3237. (2015). A new approach to detecting failures in distributed systems. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/31376
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-5937-3237. “A new approach to detecting failures in distributed systems.” 2015. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://hdl.handle.net/2152/31376.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-5937-3237. “A new approach to detecting failures in distributed systems.” 2015. Web. 06 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-5937-3237. A new approach to detecting failures in distributed systems. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2015. [cited 2021 Mar 06]. Available from: http://hdl.handle.net/2152/31376.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-5937-3237. A new approach to detecting failures in distributed systems. [Doctoral Dissertation]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/31376
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

University of Texas – Austin
12.
-2779-6876.
Secure protocols for contactless credit cards and electronic wallets.
Degree: PhD, Computer Science, 2018, University of Texas – Austin
URL: http://hdl.handle.net/2152/63350
The contactless credit card protocol in use today is insecure. The credit card industry has chosen to use the NFC channel for contactless transactions. However, reliance on NFC's short range has led to poor assumptions in the contactless credit card protocol. For example, the card assumes (sometimes incorrectly) that its ability to receive a solicitation implies the cardholder's intent to purchase. In this dissertation, we examine the protocol currently in use, and present a family of three replacement protocols to defend against its deficiencies.
First, we consider "outsider" attacks (e.g. eavesdropping, skimming attacks, relay attacks, and attacks facilitated by compromised points of sale) and design our first protocol to defend against these attacks. We call this protocol the Externally Secure CC Protocol, and design it using stepwise refinement. This protocol makes use of single-use "charge tokens" verifiable by the bank, while minimizing computation that needs to occur on the card.
Second, we identify two attacks which may be carried out by malicious retailers: Over-charge attacks and Transparent Bridge attacks. Both attacks are predicated on the customer's lack of participation in the protocol, and involve modifying or replacing a charge after it has been confirmed by the customer. We look to Electronic Wallet applications (such as Android Pay and Apple Wallet), which provide a channel between customer and card. We augment the Externally Secure CC Protocol using this channel to construct the Secure CC Protocol, binding charge tokens to a given price, and thus stymieing both outsider and malicious retailer attacks.
The Secure CC Protocol supports a property known as linkability: while only the bank can verify charge tokens, tokens from the same card can be recognized as such by the retailer. This property is also supported by the (insecure) protocol in use today, and is commonly used by retailers to construct marketing profiles on their customers. However, linkability has serious consumer privacy consequences, so we consider the converse property of unlinkability, where a retailer cannot identify different purchases as having been made by the same card. We require that our unlinkable protocol make use of existing infrastructure, so as not to require retailer cooperation. In response, we design the Unlinkable Wallet Protocol, leveraging techniques from the Secure CC Protocol to guard against malicious outsiders and retailers, while tunneling secure and unlinkable charge tokens through the protocol in use today.
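To make the notion of a single-use, price-bound charge token concrete, here is a rough sketch built on an HMAC shared between card and bank. All names, fields, and key handling are hypothetical; this is not the dissertation's Externally Secure CC or Secure CC protocol.

```python
import hashlib
import hmac
import os
import struct

def make_charge_token(card_key, price_cents):
    """Card/wallet side: bind a fresh one-time nonce and the price the customer
    confirmed into a MAC that only the bank (which shares card_key) can verify."""
    nonce = os.urandom(16)
    tag = hmac.new(card_key, nonce + struct.pack(">Q", price_cents),
                   hashlib.sha256).digest()
    return nonce, price_cents, tag

def bank_accepts(card_key, token, spent_nonces):
    """Bank side: check the MAC over (nonce, price) and enforce single use."""
    nonce, price_cents, tag = token
    expected = hmac.new(card_key, nonce + struct.pack(">Q", price_cents),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected) or nonce in spent_nonces:
        return False
    spent_nonces.add(nonce)
    return True

# usage (hypothetical key and amounts)
card_key, spent = os.urandom(32), set()
token = make_charge_token(card_key, price_cents=1299)
inflated = (token[0], 9999, token[2])               # retailer edits the price
assert not bank_accepts(card_key, inflated, spent)  # MAC no longer verifies
assert bank_accepts(card_key, token, spent)         # honest charge clears once
assert not bank_accepts(card_key, token, spent)     # replaying is rejected
```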
Advisors/Committee Members: Gouda, Mohamed G., 1947- (advisor), Alvisi, Lorenzo (committee member), Qiu, Lili (committee member), Garg, Vijay K (committee member).
Subjects/Keywords: Security; Privacy; Credit cards; Payments; Transactions; Nfc; Rfid; Proximity; Electronic wallet; Unlinkability; Authentication
APA (6th Edition):
-2779-6876. (2018). Secure protocols for contactless credit cards and electronic wallets. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/63350
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-2779-6876. “Secure protocols for contactless credit cards and electronic wallets.” 2018. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://hdl.handle.net/2152/63350.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-2779-6876. “Secure protocols for contactless credit cards and electronic wallets.” 2018. Web. 06 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-2779-6876. Secure protocols for contactless credit cards and electronic wallets. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2018. [cited 2021 Mar 06]. Available from: http://hdl.handle.net/2152/63350.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-2779-6876. Secure protocols for contactless credit cards and electronic wallets. [Doctoral Dissertation]. University of Texas – Austin; 2018. Available from: http://hdl.handle.net/2152/63350
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

University of Texas – Austin
13.
Li, Xin, Ph. D.
Distributed computing and cryptography with general weak random sources.
Degree: PhD, Computer Science, 2011, University of Texas – Austin
URL: http://hdl.handle.net/2152/30361
The use of randomness in computer science is ubiquitous. Randomized protocols have turned out to be much more efficient than their deterministic counterparts. In addition, many problems in distributed computing and cryptography are impossible to solve without randomness. However, these applications typically require uniform random bits, while in practice almost all natural random phenomena are biased. Moreover, even originally uniform random bits can be damaged if an adversary learns some partial information about these bits. In this thesis, we study how to run randomized protocols in distributed computing and cryptography with imperfect randomness. We use the most general model for imperfect randomness where the weak random source is only required to have a certain amount of min-entropy. One important tool here is the randomness extractor. A randomness extractor is a function that takes as input one or more weak random sources, and outputs a distribution that is close to uniform in statistical distance. Randomness extractors are interesting in their own right and are closely related to many other problems in computer science. Giving efficient constructions of randomness extractors with optimal parameters is one of the major open problems in the area of pseudorandomness. We construct network extractor protocols that extract private random bits for parties in a communication network, assuming that they each start with an independent weak random source, and some parties are corrupted by an adversary who sees all communications in the network. These protocols imply fault-tolerant distributed computing protocols and secure multi-party computation protocols where only imperfect randomness is available. The probabilistic method shows that there exists an extractor for two independent sources with logarithmic min-entropy, while known constructions are far from achieving these parameters. In this thesis we construct extractors for two independent sources with any linear min-entropy, based on a computational assumption. We also construct the best known extractors for three independent sources and affine sources. Finally we study the problem of privacy amplification. In this model, two parties share a private weak random source and they wish to agree on a private uniform random string through communications in a channel controlled by an adversary, who has unlimited computational power and can change the messages in arbitrary ways. All previous results assume that the two parties have local uniform random bits. We show that this problem can be solved even if the two parties only have local weak random sources. We also improve previous results in various aspects by constructing the first explicit non-malleable extractor and giving protocols based on this extractor.
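As a concrete example of what a two-source extractor is (the simplest classical construction, not one of the dissertation's new extractors): the Chor-Goldreich inner-product extractor outputs a single bit that is statistically close to uniform whenever the two independent n-bit sources have min-entropies summing to appreciably more than n.

```python
def inner_product_extractor(x_bits, y_bits):
    """Two-source extractor: one output bit equal to <x, y> mod 2.

    x_bits and y_bits must come from two independent weak sources; if their
    min-entropies together exceed roughly the bit length, the output bit is
    close to uniform even though neither input is.
    """
    assert len(x_bits) == len(y_bits)
    return sum(a & b for a, b in zip(x_bits, y_bits)) & 1

# usage with two hypothetical 8-bit samples from independent weak sources
print(inner_product_extractor([1, 0, 1, 1, 0, 0, 1, 0],
                              [0, 1, 1, 0, 1, 0, 0, 1]))
```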
Advisors/Committee Members: Zuckerman, David I. (advisor), Alvisi, Lorenzo (committee member), Kalai, Yael (committee member), Klivans, Adam (committee member), Waters, Brent (committee member).
Subjects/Keywords: Randomness; Weak random source; Extractor; Distributed computing; Cryptography
APA (6th Edition):
Li, Xin, P. D. (2011). Distributed computing and cryptography with general weak random sources. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/30361
Chicago Manual of Style (16th Edition):
Li, Xin, Ph D. “Distributed computing and cryptography with general weak random sources.” 2011. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021. http://hdl.handle.net/2152/30361.
MLA Handbook (7th Edition):
Li, Xin, Ph D. “Distributed computing and cryptography with general weak random sources.” 2011. Web. 06 Mar 2021.
Vancouver:
Li, Xin PD. Distributed computing and cryptography with general weak random sources. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2011. [cited 2021 Mar 06]. Available from: http://hdl.handle.net/2152/30361.
Council of Science Editors:
Li, Xin PD. Distributed computing and cryptography with general weak random sources. [Doctoral Dissertation]. University of Texas – Austin; 2011. Available from: http://hdl.handle.net/2152/30361

University of Texas – Austin
14.
Sperisen, Benjamin Leonard.
Essays on reputation and repeated games.
Degree: PhD, Economics, 2015, University of Texas – Austin
URL: http://hdl.handle.net/2152/30940
This dissertation consists of three essays on reputation and repeated games. Reputation models typically assume players have full memory of past events, yet in many applications this assumption does not hold. In the first chapter, I explore two different relaxations of the assumption that history is perfectly observed in the context of Ely and Välimäki's (2003) mechanic game, where reputation (with full history observation) is clearly bad for all players. First I consider "limited history," where short-run players see only the most recent T periods. For large T, the full history equilibrium behavior always holds due to an "echo" effect (for high discount factors); for small T, the repeated static equilibrium exists. Second I consider "fading history," where short-run players randomly sample past periods with probabilities that "fade" toward zero for older periods. When fading is faster than a fairly lax threshold, the long-run player always acts myopically, a result that holds more generally for reputation games where the long-run player has a strictly dominant stage game action. This finding suggests that reputational incentives may be too weak to affect long-run player behavior in some realistic word-of-mouth environments. The second chapter develops general theoretical tools to study incomplete information games where players observe only finitely many recent periods. I derive a recursive characterization of the set of equilibrium payoffs, which allows analysis of both stationary and (previously unexplored) non-stationary equilibria. I also introduce "quasi-Markov perfection," an equilibrium refinement which is a necessary condition of any equilibrium that is "non-fragile" (purifiable), i.e., robust to small, additively separable and independent perturbations of payoffs. These tools are applied to two examples. The first is a product choice game with 1-period memory of the firm's actions, obtaining a complete characterization of the exact minimum and maximum purifiable equilibrium payoffs for almost all discount factors and prior beliefs on an "honest" Stackelberg commitment type, which shows that non-stationary equilibria expand the equilibrium set. The second is the same game with long memory: in all stationary and purifiable equilibria, the long-run player obtains exactly the Stackelberg payoff so long as the memory is longer than a threshold dependent on the prior. These results show that the presence of the honest type (even for arbitrarily small prior beliefs) qualitatively changes the equilibrium set for any fixed discount factor above a threshold independent of the prior, thereby not requiring extreme patience. The third chapter studies the question of why drug trafficking organizations inflict violence on each other, and why conflict breaks out under some government crackdowns and not others, in a repeated games context. Violence between Mexican drug cartels soared following the government's anti-cartel offensive starting in 2006, but not under previous crackdowns. I construct a theoretical explanation…
Advisors/Committee Members: Wiseman, Thomas E., 1974- (advisor), Alvisi, Lorenzo (committee member), Bhaskar, V. (committee member), Stahl, Dale (committee member), Thomas, Caroline (committee member).
Subjects/Keywords: Reputation; Repeated games; Mechanic game; Bounded memory; Purifiability; Drug wars; Cartels
APA (6th Edition):
Sperisen, B. L. (2015). Essays on reputation and repeated games. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/30940
Chicago Manual of Style (16th Edition):
Sperisen, Benjamin Leonard. “Essays on reputation and repeated games.” 2015. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/30940.
MLA Handbook (7th Edition):
Sperisen, Benjamin Leonard. “Essays on reputation and repeated games.” 2015. Web. 06 Mar 2021.
Vancouver:
Sperisen BL. Essays on reputation and repeated games. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2015. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/30940.
Council of Science Editors:
Sperisen BL. Essays on reputation and repeated games. [Doctoral Dissertation]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/30940

University of Texas – Austin
15.
Pu, Xiao.
A bandwidth-enhanced fractional-N PLL through reference multiplication.
Degree: PhD, Electrical and Computer Engineering, 2011, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2011-08-4149
► The loop bandwidth of a fractional-N PLL is a desirable parameter for many applications. A wide bandwidth allows a significant attenuation of phase noise arising…
(more)
▼ The loop bandwidth of a fractional-N PLL is a key parameter for many applications. A wide bandwidth allows significant attenuation of phase noise arising from the VCO. A good VCO typically requires a high-Q LC oscillator, yet it is difficult to build an on-chip inductor with a high Q factor; a good VCO also consumes considerable power. Both of these design challenges are relaxed by a wide-loop-bandwidth PLL. However, a wide loop bandwidth reduces the effective oversampling ratio (OSR) between the update rate and the loop bandwidth, making quantization noise from the ΔΣ modulator a much bigger noise contributor. A wideband loop also makes the noise and linearity performance of the phase detector more significant. The key to a successful implementation of a wideband fractional-N synthesizer is managing jitter and spurious performance. In this dissertation we present a new PLL architecture for bandwidth extension or phase noise reduction. By using clock-squaring buffers with built-in offsets, multiple clock edges are extracted from a single cycle of a sinusoidal reference and used for phase updates, effectively forming a reference frequency multiplier. A higher update rate enables a higher OSR, which allows better quantization noise shaping and makes a wideband fractional-N PLL possible. However, since the proposed reference multiplier uses the magnitude information of the sinusoidal reference to obtain phases, the derived edges tend to cluster around the zero crossings and form an irregular clock. This presents a challenge in lock acquisition. We demonstrate for the first time that an irregular clock can be used to lock a PLL: the irregularity of the reference clock is taken into account in the divider by adding a cyclic divide pattern to the ΔΣ control bits, which forces the loop to locally match the incoming pattern and achieve lock. Theoretically, this new architecture allows a 6x increase in loop bandwidth or a 24 dB improvement in phase noise. One potential issue with the proposed approach is degraded spurious performance due to PVT variations, which lead to unintended mismatches between the irregular period and the divider pattern. A calibration scheme was invented to overcome this issue; in simulation, it was shown to lower the spurs to the inherent spur level, whose total energy is much less than the integrated phase noise. A test chip for proof of concept is presented and measurements are carefully analyzed.
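As a rough illustration of why the extracted edges form an irregular clock, the sketch below (my own simplification, not from the dissertation) derives edge times by intersecting a unit-amplitude sinusoidal reference with a set of comparator offsets; the arcsine relationship makes the edges bunch up near the zero crossings.

```python
import math

def edge_times(offsets, f_ref=1.0):
    """Times within one reference period at which sin(2*pi*f_ref*t)
    crosses each comparator offset on its rising slope.

    offsets: threshold levels in (-1, 1), standing in for the built-in
             offsets of the clock-squaring buffers (values are illustrative).
    """
    period = 1.0 / f_ref
    times = []
    for v in offsets:
        t = math.asin(v) / (2 * math.pi * f_ref)  # crossing near the rising zero crossing
        times.append(t % period)
    return sorted(times)

if __name__ == "__main__":
    offsets = [-0.6, -0.3, 0.0, 0.3, 0.6]
    times = edge_times(offsets)
    print("edge times:", [f"{t:.4f}" for t in times])
    gaps = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    print("gaps:      ", [f"{g:.4f}" for g in gaps])
```

The printed gaps are far from uniform, which is exactly the irregularity the divider's cyclic pattern has to absorb before the loop can lock.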
Advisors/Committee Members: Thomsen, Axel (advisor), Abraham, Jacob A. (advisor), Hassibi, Arjang (committee member), Orshansky, Michael (committee member), Pan, David (committee member), Alvisi, Lorenzo (committee member).
Subjects/Keywords: PLL; Fractional-N; Bandwidth; Spur
APA (6th Edition):
Pu, X. (2011). A bandwidth-enhanced fractional-N PLL through reference multiplication. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2011-08-4149
Chicago Manual of Style (16th Edition):
Pu, Xiao. “A bandwidth-enhanced fractional-N PLL through reference multiplication.” 2011. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/ETD-UT-2011-08-4149.
MLA Handbook (7th Edition):
Pu, Xiao. “A bandwidth-enhanced fractional-N PLL through reference multiplication.” 2011. Web. 06 Mar 2021.
Vancouver:
Pu X. A bandwidth-enhanced fractional-N PLL through reference multiplication. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2011. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/ETD-UT-2011-08-4149.
Council of Science Editors:
Pu X. A bandwidth-enhanced fractional-N PLL through reference multiplication. [Doctoral Dissertation]. University of Texas – Austin; 2011. Available from: http://hdl.handle.net/2152/ETD-UT-2011-08-4149

University of Texas – Austin
16.
Kim, Sangman.
Networking abstractions for GPU programs.
Degree: PhD, Computer Sciences, 2015, University of Texas – Austin
URL: http://hdl.handle.net/2152/46765
► Graphics Processing Units (GPUs) are becoming major general-purpose computing hardware for high-performance parallel computation. Despite their general computation capability and impressive performance, GPUs still lack…
(more)
▼ Graphics Processing Units (GPUs) are becoming major general-purpose computing hardware for high-performance parallel computation. Despite their general computation capability and impressive performance, GPUs still lack important operating system (OS) services like networking, which makes building networking services or distributed systems on GPUs challenging. This thesis presents GPUnet, a native GPU networking layer that provides a socket abstraction and high-level networking APIs for GPU programs. GPUnet abstracts the complicated coordination of processors and network interfaces away from GPU applications, and streamlines the development of server applications on GPUs. We develop several applications that harness the benefits of GPUnet: our matrix multiplication server built with GPUnet matches or surpasses the performance of the server without GPUnet, with only 24-43% of the lines of code. We also show the scalability of in-GPU-memory MapReduce (GimMR) applications across multiple GPUs: its word count and K-means workloads scale to four GPUs with speedups of 2.9-3.5x over one GPU. GPUnet addresses three key challenges: massive parallelism of GPU programs, memory copy overhead between CPU memory and GPU memory, and slow single-threaded performance of GPUs. To better support massively parallel GPU programs, networking API invocations from multiple threads at the same point in a data-parallel code are coalesced. Direct communication between GPUs and the network devices reduces the copy overhead, and, to minimize the amount of time spent in single-threaded operation, the control-intensive networking operations are offloaded to the network device.
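The coalescing idea, many data-parallel threads invoking a networking call that is issued as one underlying operation, can be caricatured on the CPU. The sketch below is a hypothetical Python illustration (GPUnet itself exposes a CUDA socket API); it batches the per-thread buffers that reach the same call site and hands them to a single send.

```python
import threading

class CoalescingSocket:
    """Toy illustration of a coalesced send(): buffers from all threads that
    reach the same call site are merged into one underlying operation.
    This mimics the idea only; it is not GPUnet's actual API."""

    def __init__(self, num_threads):
        self.buffers = [None] * num_threads
        self.barrier = threading.Barrier(num_threads, action=self._flush)
        self.sent = []  # stands in for the network interface

    def _flush(self):
        payload = b"".join(self.buffers)
        self.sent.append(payload)          # one operation for all threads
        self.buffers = [None] * len(self.buffers)

    def send(self, tid, data):
        self.buffers[tid] = data
        self.barrier.wait()                # coalescing point

def worker(sock, tid):
    sock.send(tid, f"chunk-{tid};".encode())

if __name__ == "__main__":
    n = 8
    sock = CoalescingSocket(n)
    threads = [threading.Thread(target=worker, args=(sock, i)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("operations issued:", len(sock.sent))
    print("payload:", sock.sent[0].decode())
```

Eight logical sends become one issued operation, which is the effect the abstract describes for threads at the same point in data-parallel code.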
Advisors/Committee Members: Witchel, Emmett (advisor), Alvisi, Lorenzo (committee member), Ford, Bryan (committee member), Pingali, Keshav (committee member), Porter, Donald E. (committee member), Silberstein, Mark (committee member).
Subjects/Keywords: GPU; Operating systems; Network; Socket; RDMA; CUDA; Infiniband
APA (6th Edition):
Kim, S. (2015). Networking abstractions for GPU programs. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/46765
Chicago Manual of Style (16th Edition):
Kim, Sangman. “Networking abstractions for GPU programs.” 2015. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/46765.
MLA Handbook (7th Edition):
Kim, Sangman. “Networking abstractions for GPU programs.” 2015. Web. 06 Mar 2021.
Vancouver:
Kim S. Networking abstractions for GPU programs. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2015. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/46765.
Council of Science Editors:
Kim S. Networking abstractions for GPU programs. [Doctoral Dissertation]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/46765
17.
-8037-0201.
Formal verification of application and system programs based on a validated x86 ISA model.
Degree: PhD, Computer science, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/46437
► Two main kinds of tools available for formal software verification are point tools and general-purpose tools. Point tools are targeted towards bug-hunting or proving a…
(more)
▼ Two main kinds of tools available for formal software verification are point tools and general-purpose tools. Point tools are targeted towards bug-hunting or proving a fixed set of properties, such as establishing the absence of buffer overflows. These tools have become a practical choice in the development and analysis of serious software systems, largely because they are easy to use. However, point tools are limited in their scope because they are pre-programmed to reason about a fixed set of behaviors. In contrast, general-purpose tools, like theorem provers, have a wider scope. Unfortunately, they also have a higher user overhead. These tools often use incomplete and/or unrealistic software models, in part to reduce this overhead. Formal verification based on such a model can be restrictive because it does not account for program behaviors that rely on missing features in the model. The results of such formal verification undertakings may be unreliable – consequently, they can offer a false sense of security. This dissertation demonstrates that formal verification of complex program properties can be made practical, without any loss of accuracy or expressiveness, by employing a machine-code analysis framework implemented using a mechanical theorem prover. To this end, we constructed a formal and executable model of the x86 Instruction-Set Architecture using the ACL2 theorem-proving system. This model includes a specification of 400+ x86 opcodes and architectural features like segmentation and paging. The model's high execution speed allows it to be validated routinely by performing co-simulations against a physical x86 processor – thus, formal analysis based on this model is reliable. We also developed a general framework for x86 machine-code analysis that can lower the overhead associated with the verification of a broad range of program properties, including correctness with respect to behavior, security, and resource requirements. We illustrate the capabilities of our framework by describing the verification of two application programs, population count and word count, and one system program, zero copy.
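The validation step, running the formal model and real hardware on the same inputs and comparing results, is independent of ACL2 and can be sketched generically. The toy example below is my own illustration with a made-up two-instruction "ISA"; it co-simulates a model interpreter against a reference implementation on random programs and flags any divergence.

```python
import random

def reference_exec(program, x):
    """Stand-in for the trusted oracle (e.g., a physical processor)."""
    for op, k in program:
        x = x + k if op == "add" else x ^ k
    return x & 0xFFFFFFFF

def model_exec(program, x):
    """The 'formal model' being validated; here deliberately equivalent."""
    for op, k in program:
        if op == "add":
            x = (x + k) & 0xFFFFFFFF
        elif op == "xor":
            x = x ^ k
        else:
            raise ValueError(f"unmodeled opcode: {op}")
    return x

def cosimulate(trials=10_000):
    random.seed(1)
    for _ in range(trials):
        program = [(random.choice(["add", "xor"]), random.randrange(2**16))
                   for _ in range(random.randrange(1, 8))]
        x0 = random.randrange(2**32)
        assert model_exec(program, x0) == reference_exec(program, x0), program
    print(f"model agreed with reference on {trials} random programs")

if __name__ == "__main__":
    cosimulate()
```

The real model covers hundreds of opcodes plus segmentation and paging, but the workflow is the same: fast execution makes routine co-simulation against hardware feasible, which is what grounds trust in proofs built on the model.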
Advisors/Committee Members: Hunt, Warren A., 1958- (advisor), Alvisi, Lorenzo (committee member), Kaufmann, Matt (committee member), Lin, Calvin (committee member), Moore, J S (committee member), Watson, Robert (committee member).
Subjects/Keywords: Formal verification; x86 Machine-code analysis; x86 ISA; ACL2; Theorem proving; Program verification
APA (6th Edition):
-8037-0201. (2016). Formal verification of application and system programs based on a validated x86 ISA model. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/46437
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-8037-0201. “Formal verification of application and system programs based on a validated x86 ISA model.” 2016. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/46437.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-8037-0201. “Formal verification of application and system programs based on a validated x86 ISA model.” 2016. Web. 06 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-8037-0201. Formal verification of application and system programs based on a validated x86 ISA model. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2016. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/46437.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-8037-0201. Formal verification of application and system programs based on a validated x86 ISA model. [Doctoral Dissertation]. University of Texas – Austin; 2016. Available from: http://hdl.handle.net/2152/46437
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
18.
-3461-6084.
UbiPAL : secure messaging and access control for ubiquitous computing.
Degree: MSin Computer Sciences, Computer Science, 2015, University of Texas – Austin
URL: http://hdl.handle.net/2152/31851
► The ubiquitous computing environment and modern trends in personal computing, such as body sensor networks and smart houses, create unique challenges in privacy and access…
(more)
▼ The ubiquitous computing environment and modern trends in personal computing, such as body sensor networks and smart houses, create unique challenges in privacy and access control. Lack of centralized computing and the dynamic nature of human environments and access rules render most access control systems insufficient for this new category of systems. UbiPAL is an object-oriented communication framework for ubiquitous systems that provides secure communication and decentralized access control. UbiPAL uses a modified SecPAL implementation to provide reliable, ad hoc access control. The UbiPAL system uses cryptographically signed, publicly held namespace certificates and access control lists in the style of TLS certificates. This approach allows message authentication and authorization in an ad hoc, completely decentralized manner while maintaining human readability of the policy language. UbiPAL was implemented as a C++ library, made freely available at (1), and evaluated to have minimal overhead. Even on the slowest device evaluated, a Raspberry Pi, UbiPAL authentication and authorization adds less than 20 milliseconds to the delivery of a message, with a message overhead of 153 bytes. The UbiPAL programming model separates access policy from application programming and requires only small amounts of code from the application programmer, creating an accessible paradigm for programming ubiquitous computing systems.
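To make the authenticate-then-authorize flow concrete, here is a minimal sketch using a shared-key MAC from the Python standard library in place of the public-key namespace certificates the abstract describes; the names `send_message` and `receive_message` are illustrative and are not UbiPAL's C++ API.

```python
import hashlib
import hmac
import json

SECRET = b"demo-shared-key"  # stand-in for a certificate's key material

def send_message(sender, recipient, body):
    """Attach an authentication tag so the recipient can check the origin."""
    payload = json.dumps({"from": sender, "to": recipient, "body": body},
                         sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def receive_message(message, acl):
    """Verify the tag, then apply a (toy) access-control rule."""
    expected = hmac.new(SECRET, message["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        raise PermissionError("authentication failed")
    fields = json.loads(message["payload"])
    if fields["from"] not in acl.get(fields["to"], set()):
        raise PermissionError("sender not authorized")
    return fields["body"]

if __name__ == "__main__":
    acl = {"thermostat": {"phone"}}            # the phone may talk to the thermostat
    msg = send_message("phone", "thermostat", "set 20C")
    print(receive_message(msg, acl))            # prints: set 20C
```

The point of the design described in the abstract is that the policy (here the `acl` dictionary) lives outside application code, so the application programmer writes very little of this machinery.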
Advisors/Committee Members: Alvisi, Lorenzo (advisor), Dickerson, Robert F. (advisor).
Subjects/Keywords: Ubiquitous computing; Mobile computing; Distributed systems; Access control; Secure messaging; Privacy; Security; Languages
APA (6th Edition):
-3461-6084. (2015). UbiPAL : secure messaging and access control for ubiquitous computing. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/31851
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-3461-6084. “UbiPAL : secure messaging and access control for ubiquitous computing.” 2015. Masters Thesis, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/31851.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-3461-6084. “UbiPAL : secure messaging and access control for ubiquitous computing.” 2015. Web. 06 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-3461-6084. UbiPAL : secure messaging and access control for ubiquitous computing. [Internet] [Masters thesis]. University of Texas – Austin; 2015. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/31851.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-3461-6084. UbiPAL : secure messaging and access control for ubiquitous computing. [Masters Thesis]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/31851
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
19.
Wang, Yang, Ph. D.
Separating data from metadata for robustness and scalability.
Degree: PhD, Computer Science, 2014, University of Texas – Austin
URL: http://hdl.handle.net/2152/28370
► When building storage systems that aim to simultaneously provide robustness, scalability, and efficiency, one faces a fundamental tension, as higher robustness typically incurs higher costs…
(more)
▼ When building storage systems that aim to simultaneously provide robustness, scalability, and efficiency, one faces a fundamental tension, as higher robustness typically incurs higher costs and thus hurts both efficiency and scalability. My research shows that an approach to storage system design based on a simple principle—separating data from metadata—can yield systems that address elegantly and effectively that tension in a variety of settings. One observation motivates our approach: much of the cost paid by many strong protection techniques is incurred to detect errors. This observation suggests an opportunity: if we can build a low-cost oracle to detect errors and identify correct data, it may be possible to reduce the cost of protection without weakening its guarantees. This dissertation shows that metadata, if carefully designed, can serve as such an oracle and help a storage system protect its data with minimal cost. This dissertation shows how to effectively apply this idea in three very different systems: Gnothi—a storage replication protocol that combines the high availability of asynchronous replication and the low cost of synchronous replication for a small-scale block storage; Salus—a large-scale block storage with unprecedented guarantees in terms of consistency, availability, and durability in the face of a wide range of server failures; and Exalt—a tool to emulate a large storage system with 100 times fewer machines.
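A minimal sketch of the "metadata as a low-cost oracle" idea, under the simplifying assumption that metadata is just a strongly protected map from block identifier to cryptographic hash (the dissertation's systems are far richer): reads fetch a block from any replica, cheaply check it against the hash, and fall back to another replica on a mismatch.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Store:
    """One data replica; it may silently corrupt what it holds."""
    def __init__(self):
        self.blocks = {}
    def put(self, key, data):
        self.blocks[key] = data
    def get(self, key):
        return self.blocks.get(key)

def write(key, data, replicas, metadata):
    for r in replicas:
        r.put(key, data)
    metadata[key] = digest(data)        # small, cheap to protect strongly

def read(key, replicas, metadata):
    want = metadata[key]
    for r in replicas:
        data = r.get(key)
        if data is not None and digest(data) == want:
            return data                  # the metadata vouched for this copy
    raise IOError("no replica holds a correct copy")

if __name__ == "__main__":
    metadata = {}
    replicas = [Store(), Store(), Store()]
    write("blk0", b"hello", replicas, metadata)
    replicas[0].blocks["blk0"] = b"corrupted"   # inject a fault
    print(read("blk0", replicas, metadata))     # still returns b'hello'
```

Because the expensive error-detection work is concentrated in the tiny metadata path, the bulk data path can remain cheap, which is the tension the abstract says the systems Gnothi, Salus, and Exalt exploit in different settings.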
Advisors/Committee Members: Alvisi, Lorenzo (advisor), Dahlin, Michael (advisor).
Subjects/Keywords: Metadata; Robustness; Scalability; Storage; Replication; Fault tolerance
APA (6th Edition):
Wang, Yang, P. D. (2014). Separating data from metadata for robustness and scalability. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/28370
Chicago Manual of Style (16th Edition):
Wang, Yang, Ph D. “Separating data from metadata for robustness and scalability.” 2014. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/28370.
MLA Handbook (7th Edition):
Wang, Yang, Ph D. “Separating data from metadata for robustness and scalability.” 2014. Web. 06 Mar 2021.
Vancouver:
Wang, Yang PD. Separating data from metadata for robustness and scalability. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2014. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/28370.
Council of Science Editors:
Wang, Yang PD. Separating data from metadata for robustness and scalability. [Doctoral Dissertation]. University of Texas – Austin; 2014. Available from: http://hdl.handle.net/2152/28370
20.
Rebello, Rohan Francis.
Byzantine fault tolerant web applications using the UpRight library.
Degree: MA, Computer Sciences, 2009, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2009-08-362
► Web applications are widely used for email, online sales, auctions, collaboration, etc. Most of today’s highly-available web applications implement fault tolerant protocols in order to…
(more)
▼ Web applications are widely used for email, online sales, auctions, collaboration, etc. Most of today’s highly-available web applications implement fault tolerant protocols in order to tolerate crash faults. However, recent system-wide failures have been caused by arbitrary or Byzantine faults which these applications are not capable of handling. Despite the abundance of research on adding Byzantine fault tolerance (BFT) to a system, BFT systems have found little use outside the research community. Reasons typically cited for this are the difficulty in implementing such systems and the performance overhead associated with them. While most research focuses on improving the performance or lowering the replication cost of BFT protocols, little has been done on making them easy to implement.
The goal of this thesis is to evaluate the viability of BFT web applications and show that, given the right abstraction, it is viable to build a Byzantine fault tolerant web application without extensive reimplementation of the web application. In order to achieve this goal, it demonstrates a BFT implementation of the Apache Tomcat servlet container and the VQWiki web application by using the UpRight BFT library. The UpRight library provides abstractions that make it easy to develop BFT applications, and we leverage this abstraction to reduce the implementation cost of our system. Our results are encouraging: less than 2% of the original system needs to be modified while still retaining all the functionality of the original system. Given the design trade-offs that we make in implementing the system, we also get comparable performance, indicating that implementing BFT is a viable option to explore for highly-available web applications.
Advisors/Committee Members: Alvisi, Lorenzo (advisor), Dahlin, Michael (committee member).
Subjects/Keywords: fault tolerance; byzantine; web applications; UpRight library
APA (6th Edition):
Rebello, R. F. (2009). Byzantine fault tolerant web applications using the UpRight library. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2009-08-362
Chicago Manual of Style (16th Edition):
Rebello, Rohan Francis. “Byzantine fault tolerant web applications using the UpRight library.” 2009. Masters Thesis, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/ETD-UT-2009-08-362.
MLA Handbook (7th Edition):
Rebello, Rohan Francis. “Byzantine fault tolerant web applications using the UpRight library.” 2009. Web. 06 Mar 2021.
Vancouver:
Rebello RF. Byzantine fault tolerant web applications using the UpRight library. [Internet] [Masters thesis]. University of Texas – Austin; 2009. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/ETD-UT-2009-08-362.
Council of Science Editors:
Rebello RF. Byzantine fault tolerant web applications using the UpRight library. [Masters Thesis]. University of Texas – Austin; 2009. Available from: http://hdl.handle.net/2152/ETD-UT-2009-08-362

University of Texas – Austin
21.
Napper, Jeffrey Michael.
Robust multithreaded applications.
Degree: PhD, Computer Sciences, 2008, University of Texas – Austin
URL: http://hdl.handle.net/2152/3966
► This thesis discusses techniques for improving the fault tolerance of multithreaded applications. We consider the impact on fault tolerance methods of sharing address space and…
(more)
▼ This thesis discusses techniques for improving the fault tolerance of multithreaded applications. We consider the impact on fault tolerance methods of sharing address space and resources. We develop techniques in two broad categories: conservative multithreaded fault-tolerance (C-MTFT), which recovers an entire application on the failure of a single thread, and optimistic multithreaded fault-tolerance (OMTFT), which recovers threads independently as necessary. In the latter category, we provide a novel approach to recover hung threads while improving recovery time by managing access to shared resources so that hung threads can be restarted while other threads continue execution.
Advisors/Committee Members: Alvisi, Lorenzo (advisor).
Subjects/Keywords: Simultaneous multithreading processors; Fault-tolerant computing; Computer storage devices
APA (6th Edition):
Napper, J. M. (2008). Robust multithreaded applications. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/3966
Chicago Manual of Style (16th Edition):
Napper, Jeffrey Michael. “Robust multithreaded applications.” 2008. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/3966.
MLA Handbook (7th Edition):
Napper, Jeffrey Michael. “Robust multithreaded applications.” 2008. Web. 06 Mar 2021.
Vancouver:
Napper JM. Robust multithreaded applications. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2008. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/3966.
Council of Science Editors:
Napper JM. Robust multithreaded applications. [Doctoral Dissertation]. University of Texas – Austin; 2008. Available from: http://hdl.handle.net/2152/3966

University of Texas – Austin
22.
Martin, Jean-Philippe Etienne.
Byzantine fault-tolerance and beyond.
Degree: PhD, Computer Sciences, 2006, University of Texas – Austin
URL: http://hdl.handle.net/2152/2828
► Byzantine fault-tolerance techniques are useful because they tolerate arbitrary faults regardless of cause: bugs, hardware glitches, even hackers. These techniques have recently gained popularity after…
(more)
▼ Byzantine fault-tolerance techniques are useful because they tolerate arbitrary faults regardless of cause: bugs, hardware glitches, even hackers. These techniques have recently gained popularity after it was shown that they could be made practical.
Most of the dissertation builds on Byzantine fault-tolerance (BFT) and extends it with new results for both quorum systems and state machine replication. Our contributions include proving new lower bounds, finding new protocols that meet these bounds, and providing new functionality at lower cost through a new architecture for state machine replication.
The second part of the dissertation goes beyond Byzantine fault-tolerance. We show that BFT techniques are not sufficient for networks that span multiple administrative domains, propose the new BAR model to describe these environments, and show how to build BAR-tolerant protocols through our example of a BAR-tolerant terminating reliable broadcast protocol.
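As background for the quorum-system results mentioned above, the sketch below shows the generic read-side idea of Byzantine quorum protocols: with enough replicas relative to the fault bound f, a value reported by at least f + 1 responders must have come from at least one correct replica. This is a textbook-style illustration, not any specific protocol or bound from the dissertation.

```python
from collections import Counter

def quorum_read(responses, f):
    """responses: list of (replica_id, value) pairs from a read quorum.
    Returns a value vouched for by at least f + 1 replicas, i.e. by at
    least one correct replica even if up to f respondents are Byzantine."""
    counts = Counter(value for _, value in responses)
    value, votes = counts.most_common(1)[0]
    if votes >= f + 1:
        return value
    raise RuntimeError("no value has f + 1 supporters; retry or widen the quorum")

if __name__ == "__main__":
    f = 1
    responses = [(0, "v2"), (1, "v2"), (2, "bogus"), (3, "v2")]
    print(quorum_read(responses, f))   # prints: v2
```

The interesting questions, which the dissertation addresses, are how few replicas and how few communication rounds suffice to make such reads and the corresponding writes safe and live.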
Advisors/Committee Members: Alvisi, Lorenzo (advisor).
Subjects/Keywords: Fault-tolerant computing
APA (6th Edition):
Martin, J. E. (2006). Byzantine fault-tolerance and beyond. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/2828
Chicago Manual of Style (16th Edition):
Martin, Jean-Philippe Etienne. “Byzantine fault-tolerance and beyond.” 2006. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/2828.
MLA Handbook (7th Edition):
Martin, Jean-Philippe Etienne. “Byzantine fault-tolerance and beyond.” 2006. Web. 06 Mar 2021.
Vancouver:
Martin JE. Byzantine fault-tolerance and beyond. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2006. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/2828.
Council of Science Editors:
Martin JE. Byzantine fault-tolerance and beyond. [Doctoral Dissertation]. University of Texas – Austin; 2006. Available from: http://hdl.handle.net/2152/2828
23.
Li, Harry Chu-Kit.
Robust peer-to-peer systems.
Degree: PhD, Computer Sciences, 2009, University of Texas – Austin
URL: http://hdl.handle.net/2152/29627
► Peer-to-peer (p2p) approaches are an increasingly effective way to deploy services. Popular examples include BitTorrent, Skype, and KaZaA. These approaches are attractive because they can…
(more)
▼ Peer-to-peer (p2p) approaches are an increasingly effective way to deploy services. Popular examples include BitTorrent, Skype, and KaZaA. These approaches are attractive because they can be highly fault-tolerant, scalable, adaptive, and less expensive than a more centralized solution. Cooperation lies at the heart of these strengths. Yet, in settings where working together is crucial, a natural question is: "What if users stop cooperating?" After all, cooperative services are typically deployed over multiple administrative domains, and thus vulnerable to Byzantine failures and users who may act selfishly. This dissertation explores how to construct p2p systems to tolerate Byzantine participants while also incentivizing selfish participants to contribute resources. We describe how to balance obedience against choice in building a robust p2p live streaming system. Imposing obedience is desirable as it leaves little room for peers to attack or cheat the system. However, providing choice is also attractive as it allows us to engineer flexible and efficient solutions. We first focus on obedience by using Nash equilibria to drive the design of BAR Gossip, the first gossip protocol that is resilient to Byzantine and selfish nodes. BAR Gossip relies on verifiable pseudo-random partner selection to eliminate non-determinism, which can be used to game the system, while maintaining the robustness and rapid convergence of traditional gossip. A novel fair enough exchange primitive entices cooperation among selfish peers on short timescales, thereby avoiding the need for distributed reputation schemes. We next focus on tempering obedience with choice by using approximate equilibria to guide the construction of a novel p2p live streaming system. These equilibria allow us to design incentives to limit selfish behavior rigorously, yet provide sufficient flexibility to build practical systems. We show the advantages of using an ε-Nash equilibrium, instead of an exact Nash, to design and implement FlightPath, our live streaming system that uses bandwidth efficiently, absorbs flash crowds, adapts to sudden peer departures, handles churn, and tolerates malicious activity.
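The "verifiable pseudo-random partner selection" idea can be sketched simply: derive each round's partner from a hash of public, non-manipulable inputs, so any peer can recompute and audit the choice. The sketch below is my own simplification; a real deployment would bind the seed to unpredictable, signed material rather than a fixed string.

```python
import hashlib

def partner_for(node_id: int, round_no: int, seed: bytes, num_nodes: int) -> int:
    """Deterministic, publicly checkable gossip-partner choice."""
    material = seed + node_id.to_bytes(4, "big") + round_no.to_bytes(8, "big")
    h = int.from_bytes(hashlib.sha256(material).digest(), "big")
    partner = h % num_nodes
    # Avoid selecting yourself; shift deterministically so it stays auditable.
    if partner == node_id:
        partner = (partner + 1) % num_nodes
    return partner

def verify_choice(claimed_partner, node_id, round_no, seed, num_nodes):
    """Any peer can audit that a node exchanged with the mandated partner."""
    return claimed_partner == partner_for(node_id, round_no, seed, num_nodes)

if __name__ == "__main__":
    seed = b"epoch-42"
    p = partner_for(node_id=7, round_no=3, seed=seed, num_nodes=100)
    print(p, verify_choice(p, 7, 3, seed, 100))
```

Because a selfish node cannot pick a convenient partner without being caught, non-determinism stops being an avenue for gaming the protocol, which is the property the abstract attributes to BAR Gossip.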
Advisors/Committee Members: Alvisi, Lorenzo (advisor).
Subjects/Keywords: Peer-to-peer systems; P2p systems; Nash equilibria; BAR Gossip
APA (6th Edition):
Li, H. C. (2009). Robust peer-to-peer systems. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/29627
Chicago Manual of Style (16th Edition):
Li, Harry Chu-Kit. “Robust peer-to-peer systems.” 2009. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/29627.
MLA Handbook (7th Edition):
Li, Harry Chu-Kit. “Robust peer-to-peer systems.” 2009. Web. 06 Mar 2021.
Vancouver:
Li HC. Robust peer-to-peer systems. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2009. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/29627.
Council of Science Editors:
Li HC. Robust peer-to-peer systems. [Doctoral Dissertation]. University of Texas – Austin; 2009. Available from: http://hdl.handle.net/2152/29627
24.
Porter, Donald E.
Operating system transactions.
Degree: PhD, Computer Sciences, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-12-2488
► Applications must be able to synchronize accesses to operating system (OS) resources in order to ensure correctness in the face of concurrency and system failures.…
(more)
▼ Applications must be able to synchronize accesses to operating system (OS) resources in order to ensure correctness in the face of concurrency and system failures. This thesis proposes system transactions, with which the programmer specifies atomic updates to heterogeneous system resources and the OS guarantees atomicity, consistency, isolation, and durability (ACID). This thesis provides a model for system transactions as a concurrency control mechanism. System transactions efficiently and cleanly solve long-standing concurrency problems that are difficult to address with other techniques. For example, malicious users can exploit race conditions between distinct system calls in privileged applications, gaining administrative access to a system. Programmers can close these vulnerabilities by eliminating such race conditions with system transactions. Similarly, failed software installations can leave a system unusable. System transactions can roll back an unsuccessful software installation without disturbing concurrent, independent updates to the file system.
This thesis describes the design and implementation of TxOS, a variant of Linux 2.6.22 that implements system transactions. The thesis contributes new implementation techniques that yield fast, serializable transactions with strong isolation and fairness between system transactions and non-transactional activity. Using system transactions, programmers can build applications with better performance or stronger correctness guarantees from simpler code. For instance, wrapping an installation of OpenSSH in a system transaction guarantees that a failed installation will be rolled back completely. These atomicity properties are provided by the OS, requiring no modification to the installer itself and adding only 10% performance overhead. The prototype implementation of system transactions also minimizes non-transactional overheads. For instance, a non-transactional compilation of Linux incurs negligible (less than 2%) overhead on TxOS.
Finally, this thesis describes a new lock-free linked list algorithm, called OLF, for optimistic, lock-free lists. OLF addresses key limitations of prior algorithms, which sacrifice functionality for performance. Prior lock-free list algorithms can safely insert or delete a single item, but cannot atomically compose multiple operations (e.g., atomically move an item between two lists). OLF provides both arbitrary composition of list operations as well as performance scalability close to previous lock-free list designs. OLF also removes previous requirements for dynamic memory allocation and garbage collection of list nodes, making it suitable for low-level system software, such as the Linux kernel. We replace lists in the Linux kernel's directory cache, which currently requires a coarse-grained lock to ensure invariants across multiple lists, with OLF lists. OLF lists in the Linux kernel improve performance of a filesystem metadata microbenchmark …
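To show roughly what all-or-nothing semantics buys an installer, here is a toy user-level analogy in Python, not TxOS's kernel interface: writes are staged and either all published or all discarded when the enclosing block exits. TxOS provides this atomicity in the kernel and across heterogeneous OS resources; this sketch only covers whole-file writes.

```python
import os
import tempfile

class ToyTransaction:
    """Illustrative analogy only: stage file writes and apply them all or
    not at all. It is not the system-transaction API the thesis describes."""

    def __init__(self):
        self.staged = {}           # path -> new contents

    def write(self, path, data: bytes):
        self.staged[path] = data   # buffered, not yet visible

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            self.staged.clear()    # roll back: nothing becomes visible
            return False
        for path, data in self.staged.items():
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
            with os.fdopen(fd, "wb") as f:
                f.write(data)
            os.replace(tmp, path)  # publish atomically, per file
        return False

if __name__ == "__main__":
    try:
        with ToyTransaction() as tx:
            tx.write("config.new", b"version=2\n")
            raise RuntimeError("installer failed midway")
    except RuntimeError:
        print("rolled back:", not os.path.exists("config.new"))
```

The limitation of the analogy is the point of the thesis: user-level tricks like rename can only make one file at a time atomic, whereas system transactions let the OS make an arbitrary group of system calls atomic and isolated.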
Advisors/Committee Members: Witchel, Emmett (advisor), Alvisi, Lorenzo (committee member), McKinley, Kathryn S. (committee member), Shmatikov, Vitaly (committee member), Swift, Michael (committee member).
Subjects/Keywords: Transactions; Operating systems; Concurrency; TxOS; Race conditions; Transactional memory; Lock-free data structures; System transactions; Linked list algorithms; Lock-free algorithm; Lock-free lists
APA (6th Edition):
Porter, D. E. (2010). Operating system transactions. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-12-2488
Chicago Manual of Style (16th Edition):
Porter, Donald E. “Operating system transactions.” 2010. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/ETD-UT-2010-12-2488.
MLA Handbook (7th Edition):
Porter, Donald E. “Operating system transactions.” 2010. Web. 06 Mar 2021.
Vancouver:
Porter DE. Operating system transactions. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2010. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/ETD-UT-2010-12-2488.
Council of Science Editors:
Porter DE. Operating system transactions. [Doctoral Dissertation]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-12-2488
25.
Clement, Allen Grogan.
UpRight fault tolerance.
Degree: PhD, Computer Sciences, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-12-2359
► Experiences with computer systems indicate an inconvenient truth: computers fail and they fail in interesting ways. Although using redundancy to protect against fail-stop failures is…
(more)
▼ Experiences with computer systems indicate an inconvenient truth: computers fail and they fail in interesting ways. Although using redundancy to protect against fail-stop failures is common practice, non-fail-stop computer and network failures occur for a variety of reasons including power outage, disk or memory corruption, NIC malfunction, user error, operating system and application bugs or misconfiguration, and many others. The impact of these failures can be dramatic, ranging from service unavailability to stranding airplane passengers on the runway to companies closing. While high-stakes embedded systems have embraced Byzantine fault tolerant techniques, general purpose computing continues to rely on techniques that are fundamentally crash tolerant. In a general purpose environment, the current best-practices response to non-fail-stop failures can charitably be described as pragmatic: identify a root cause and add checksums to prevent that error from happening again in the future. Pragmatic responses have proven effective for patching holes and protecting against faults once they have occurred; unfortunately the initial damage has already been done, and it is difficult to say if the patches made to address previous faults will protect against future failures. We posit that an end-to-end solution based on Byzantine fault tolerant (BFT) state machine replication is an efficient and deployable alternative to the current ad hoc approaches favored in general purpose computing. The replicated state machine approach ensures that multiple copies of the same deterministic application execute requests in the same order and provides end-to-end assurance that independent transient failures will not lead to unavailability or incorrect responses. An efficient and effective end-to-end solution covers faults that have already been observed as well as failures that have not yet occurred, and it provides structural confidence that developers won't have to track down yet another failure caused by some unpredicted memory, disk, or network behavior. While the promise of end-to-end failure protection is intriguing, significant technical and practical challenges currently prevent adoption in general purpose computing environments. On the technical side, it is important that end-to-end solutions maintain the performance characteristics of deployed systems: if end-to-end solutions dramatically increase computing requirements, dramatically reduce throughput, or dramatically increase latency during normal operation, then end-to-end techniques are a non-starter. On the practical side, it is important that end-to-end approaches be both comprehensible and easy to incorporate: if the cost of end-to-end solutions is rewriting an application or trusting intricate and arcane protocols, then end-to-end solutions will not be adopted. In this thesis we show that BFT state machine replication can be used in deployed systems. Reaching this goal requires us to address both the technical and practical challenges previously mentioned. We…
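The core replicated-state-machine guarantee, that replicas applying the same requests in the same order stay in agreement and a client only needs matching answers from f + 1 of them, can be sketched in a few lines. This is a generic illustration under an assumed agreed-upon request order, not UpRight's protocol.

```python
from collections import Counter

class Replica:
    """A deterministic key-value state machine."""
    def __init__(self):
        self.state = {}
    def execute(self, request):
        op, key, *rest = request
        if op == "put":
            self.state[key] = rest[0]
            return "ok"
        return self.state.get(key)

def client_result(replies, f):
    """Accept a reply only if f + 1 replicas agree on it."""
    value, votes = Counter(replies).most_common(1)[0]
    if votes >= f + 1:
        return value
    raise RuntimeError("not enough matching replies")

if __name__ == "__main__":
    f = 1
    replicas = [Replica() for _ in range(3 * f + 1)]
    ordered_log = [("put", "x", "42"), ("get", "x")]
    for request in ordered_log:                      # the same order everywhere
        replies = [r.execute(request) for r in replicas]
        replies[0] = "tampered"                      # one faulty replica
        print(request, "->", client_result(replies, f))
```

The hard parts the dissertation tackles sit outside this sketch: agreeing on the order despite Byzantine replicas, keeping the overheads close to an unreplicated deployment, and making the machinery easy enough to wrap around existing applications.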
Advisors/Committee Members: Alvisi, Lorenzo (advisor), Dahlin, Mike (advisor), Druschel, Peter (committee member), Walfish, Michael (committee member), Witchell, Emmett (committee member).
Subjects/Keywords: Fault tolerance; Dependability; Byzantine fault tolerance; UpRight fault tolerance; Replicated state machine
APA (6th Edition):
Clement, A. G. (2010). UpRight fault tolerance. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-12-2359
Chicago Manual of Style (16th Edition):
Clement, Allen Grogan. “UpRight fault tolerance.” 2010. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/ETD-UT-2010-12-2359.
MLA Handbook (7th Edition):
Clement, Allen Grogan. “UpRight fault tolerance.” 2010. Web. 06 Mar 2021.
Vancouver:
Clement AG. UpRight fault tolerance. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2010. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/ETD-UT-2010-12-2359.
Council of Science Editors:
Clement AG. UpRight fault tolerance. [Doctoral Dissertation]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-12-2359
26.
-5508-4043.
Platform-level protection for interacting mobile apps.
Degree: PhD, Computer science, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/43721
► In a modern mobile platform, apps are mutually distrustful, but they share the same device and frequently interact with each other. This dissertation shows how…
(more)
▼ In a modern mobile platform, apps are mutually distrustful, but they share the same device and frequently interact with each other. This dissertation shows how existing platforms, like Android and iOS, often fail to support important data protection scenarios, and describes two systems to improve platform-level security.
First, many data leaks in existing platforms are due to the lack of information flow control for inter-app data exchanges. For example, a document viewer that opens an attachment from an email client often further discloses the attachment to other apps or to the network. To prevent such leaks, we need strict information flow confinement, but a challenge to enforce such confinement in existing platforms is the potential disruptions to confined apps. We present Maxoid, a system that uses context-aware custom views of apps' storage state to make information flow enforcement backward compatible.
Second, apps' abstraction of data has diverged from platforms' abstraction of data. Modern mobile apps heavily rely on structured data, and relational databases have become the hub for apps' internal data management. However, in existing platforms, protection mechanisms are coarse-grained and have no visibility to the structures of apps' data. In these platforms, access control is a mixture of coarse-grained mechanisms and many ad hoc user-level checks, making data protection unprincipled and error-prone. We present Earp, a new mobile platform that combines simple object-level permissions and capability relationships among objects to naturally protect structured data for mobile apps. It achieves a uniform abstraction for storing, sharing and efficiently protecting structured data, for both storage and inter-app services.
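A toy illustration of the information-flow idea behind the first system: when an app handles data derived from another app's secret, its outputs inherit that label and export off the device is blocked. This is my own minimal sketch of label propagation, not Maxoid's mechanism or API.

```python
class Labeled:
    """A value tagged with the set of apps whose secrets it derives from."""
    def __init__(self, value, taint=frozenset()):
        self.value = value
        self.taint = frozenset(taint)

def combine(a: Labeled, b: Labeled) -> Labeled:
    """Any computation over labeled inputs carries the union of their taints."""
    return Labeled((a.value, b.value), a.taint | b.taint)

def send_to_network(item: Labeled):
    if item.taint:
        raise PermissionError(f"blocked: data tainted by {sorted(item.taint)}")
    print("sent:", item.value)

if __name__ == "__main__":
    attachment = Labeled("tax-return.pdf", {"email"})   # received from the email app
    note = Labeled("reviewed")                          # the viewer's own data
    result = combine(attachment, note)                  # the viewer processed it
    send_to_network(note)                               # allowed
    try:
        send_to_network(result)                         # confined
    except PermissionError as e:
        print(e)
```

The backward-compatibility challenge the abstract highlights is exactly that confined apps like the document viewer still expect their normal storage behavior, which is why Maxoid gives them context-aware custom views of storage state rather than simply failing their writes.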
Advisors/Committee Members: Witchel, Emmett (advisor), Alvisi, Lorenzo (committee member), Geambasu, Roxana (committee member), Pingali, Keshav (committee member), Shmatikov, Vitaly (committee member).
Subjects/Keywords: Operating systems; Mobile platforms; Security; Confinement
APA (6th Edition):
-5508-4043. (2016). Platform-level protection for interacting mobile apps. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/43721
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-5508-4043. “Platform-level protection for interacting mobile apps.” 2016. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/43721.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-5508-4043. “Platform-level protection for interacting mobile apps.” 2016. Web. 06 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-5508-4043. Platform-level protection for interacting mobile apps. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2016. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/43721.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-5508-4043. Platform-level protection for interacting mobile apps. [Doctoral Dissertation]. University of Texas – Austin; 2016. Available from: http://hdl.handle.net/2152/43721
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
27.
Rozner, Eric John.
Combatting loss in wireless networks.
Degree: PhD, Computer Science, 2011, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2011-12-4558
► The wireless medium is lossy due to many reasons, such as signal attenuation, multi-path propagation, and collisions. Wireless losses degrade network throughput, reliability, and latency.…
(more)
▼ The wireless medium is lossy for many reasons, such as signal attenuation, multi-path propagation, and collisions. Wireless losses degrade network throughput, reliability, and latency. The goal of this dissertation is to combat wireless losses by developing effective techniques and protocols across different network layers. First, a novel opportunistic routing protocol is developed to overcome wireless losses at the network layer. Opportunistic routing protocols exploit receiver diversity to route traffic in the face of loss. A distinctive feature of the protocol is that the performance derived from its optimization can be achieved in real IEEE 802.11 networks. At its heart lies a simple yet realistic model of the network that captures wireless interference, losses, traffic, and MAC-induced dependencies. A model-driven optimization algorithm is then designed to accurately optimize the end-to-end performance, and techniques are developed to map the resulting optimization solutions to practical routing configurations. Its effectiveness is demonstrated using simulation and testbed experiments. Second, an efficient retransmission scheme (ER) is developed at the link layer for wireless networks. Instead of retransmitting lost packets in their original forms, ER codes packets lost at different destinations and uses a single retransmission to potentially recover multiple packet losses. A simple and practical protocol is developed to realize the idea, and it is evaluated using simulation and testbed experiments to demonstrate its effectiveness. Third, detailed measurement traces are collected to understand wireless losses in dynamic and mobile environments. Existing wireless drivers are modified to enable the logging and analysis of network activity under varying end-host configurations. The results indicate that mobile clients can suffer from consecutive packet losses, or burst errors. The burst errors are then analyzed in more detail to gain further insights into the problem. With these insights, recommendations for future research directions to mitigate loss in mobile environments are presented.
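The coded-retransmission idea behind ER can be illustrated with a single XOR: if destination A lost packet p2 but has p1, and destination B lost p1 but has p2, one broadcast of p1 XOR p2 lets each destination recover its missing packet. The sketch below assumes equal-length packets and is a generic network-coding illustration rather than ER's full protocol.

```python
def xor(a: bytes, b: bytes) -> bytes:
    assert len(a) == len(b), "illustration assumes equal-length packets"
    return bytes(x ^ y for x, y in zip(a, b))

if __name__ == "__main__":
    p1 = b"PACKET-ONE"
    p2 = b"PACKET-TWO"
    coded = xor(p1, p2)            # the single retransmission

    # Destination A received p1 but lost p2:
    recovered_p2 = xor(coded, p1)
    # Destination B received p2 but lost p1:
    recovered_p1 = xor(coded, p2)

    print(recovered_p1 == p1, recovered_p2 == p2)   # True True
```

One transmission repairs two independent losses, which is where the scheme's efficiency gain over conventional per-destination retransmission comes from.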
Advisors/Committee Members: Qiu, Lili, Ph. D. (advisor), Alvisi, Lorenzo (committee member), Chandra, Ranveer (committee member), de Veciana, Gustavo (committee member), Zhang, Yin (committee member).
Subjects/Keywords: Wireless communication; Loss; 802.11; Opportunistic routing; Retransmissions; Network coding; Wi-Fi; Case study
APA (6th Edition):
Rozner, E. J. (2011). Combatting loss in wireless networks. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2011-12-4558
Chicago Manual of Style (16th Edition):
Rozner, Eric John. “Combatting loss in wireless networks.” 2011. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/ETD-UT-2011-12-4558.
MLA Handbook (7th Edition):
Rozner, Eric John. “Combatting loss in wireless networks.” 2011. Web. 06 Mar 2021.
Vancouver:
Rozner EJ. Combatting loss in wireless networks. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2011. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/ETD-UT-2011-12-4558.
Council of Science Editors:
Rozner EJ. Combatting loss in wireless networks. [Doctoral Dissertation]. University of Texas – Austin; 2011. Available from: http://hdl.handle.net/2152/ETD-UT-2011-12-4558

University of Texas – Austin
28.
Yin, Jian.
Volume lease: a scalable cache consistency framework.
Degree: PhD, Computer Sciences, 2003, University of Texas – Austin
URL: http://hdl.handle.net/2152/1088
Subjects/Keywords: Computer architecture; Cache memory
APA (6th Edition):
Yin, J. (2003). Volume lease: a scalable cache consistency framework. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/1088
Chicago Manual of Style (16th Edition):
Yin, Jian. “Volume lease: a scalable cache consistency framework.” 2003. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/1088.
MLA Handbook (7th Edition):
Yin, Jian. “Volume lease: a scalable cache consistency framework.” 2003. Web. 06 Mar 2021.
Vancouver:
Yin J. Volume lease: a scalable cache consistency framework. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2003. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/1088.
Council of Science Editors:
Yin J. Volume lease: a scalable cache consistency framework. [Doctoral Dissertation]. University of Texas – Austin; 2003. Available from: http://hdl.handle.net/2152/1088

University of Texas – Austin
29.
Aiyer, Amitanand S.
Alternative implementations for storage and communication abstractions in distributed systems.
Degree: PhD, Computer Sciences, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-08-1918
► Abstractions are widely used in building reliable distributed systems as they simplifies the task of building complex systems and aid in reasoning about them. Implementing…
(more)
▼ Abstractions are widely used in building reliable distributed systems, as they simplify the task of building complex systems and aid in reasoning about them. Implementing these abstractions, however, requires making certain assumptions about the environment in which they will be used.
We find that there is a mismatch in the set of assumptions used to implement abstractions in the different layers of a distributed system. This leads to a costlier design and may render the implementation unusable in situations where the assumptions do not hold.
In this dissertation we provide alternative implementations for the abstractions of distributed registers and communication channels that rely on a unified set of assumptions across the different layers of a distributed system.
Advisors/Committee Members: Alvisi, Lorenzo (advisor), Bazzi, Rida A. (committee member), Dahlin, Michael (committee member), Gouda, Mohamed (committee member), Wylie, Jay (committee member).
Subjects/Keywords: Distributed systems; Abstractions
APA (6th Edition):
Aiyer, A. S. (2010). Alternative implementations for storage and communication abstractions in distributed systems. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-08-1918
Chicago Manual of Style (16th Edition):
Aiyer, Amitanand S. “Alternative implementations for storage and communication abstractions in distributed systems.” 2010. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/ETD-UT-2010-08-1918.
MLA Handbook (7th Edition):
Aiyer, Amitanand S. “Alternative implementations for storage and communication abstractions in distributed systems.” 2010. Web. 06 Mar 2021.
Vancouver:
Aiyer AS. Alternative implementations for storage and communication abstractions in distributed systems. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2010. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/ETD-UT-2010-08-1918.
Council of Science Editors:
Aiyer AS. Alternative implementations for storage and communication abstractions in distributed systems. [Doctoral Dissertation]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-08-1918

University of Texas – Austin
30.
Norman, Alison Nicholas.
Compiler-assisted staggered checkpointing.
Degree: PhD, Computer Sciences, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-08-1746
► To make progress in the face of failures, long-running parallel applications need to save their state, known as a checkpoint. Unfortunately, current checkpointing techniques are…
(more)
▼ To make progress in the face of failures, long-running parallel applications need to save their state, known as a checkpoint. Unfortunately, current checkpointing techniques are becoming untenable on large-scale supercomputers. Many applications checkpoint all processes simultaneously – a technique that is easy to implement but often saturates the network and file system, causing a significant increase in checkpoint overhead. This thesis introduces compiler-assisted staggered checkpointing, where processes checkpoint at different places in the application text, thereby reducing contention for the network and file system. This checkpointing technique is algorithmically challenging since the number of possible solutions is enormous and the number of desirable solutions is small, but we have developed a compiler algorithm that both places staggered checkpoints in an application and ensures that the solution is desirable. This algorithm successfully places staggered checkpoints in parallel applications configured to use tens of thousands of processes. For our benchmarks, the algorithm finds and places useful recovery lines that, across all configurations, are up to 37% faster than recovery lines where all processes write their data at approximately the same time. We also analyze the success of staggered checkpointing by investigating sets of application and system characteristics for which it reduces network and file system contention. We find that for many configurations, staggered checkpointing reduces both checkpointing time and overall execution time. To perform these analyses, we develop an event-driven simulator for large-scale systems that estimates the behavior of the network, global file system, and local hardware using predictive models. Our simulator allows us to accurately study applications that have thousands of processes; on average, it predicts execution times at 83% of their measured value.
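A minimal sketch of the scheduling intuition (not the compiler algorithm itself, which places checkpoints at different points in the application text): bound how many processes may write a checkpoint concurrently by assigning ranks to staggered waves, so the file system sees a capped number of writers at any time. The wave size is an assumed tuning knob here.

```python
def staggered_schedule(num_procs, max_concurrent_writers):
    """Assign each process rank to a checkpoint wave; processes in the
    same wave checkpoint together, and later waves wait their turn."""
    waves = []
    for start in range(0, num_procs, max_concurrent_writers):
        waves.append(list(range(start, min(start + max_concurrent_writers,
                                           num_procs))))
    return waves

def estimated_checkpoint_time(waves, per_wave_seconds):
    """Naive cost model: waves run back to back instead of all at once."""
    return len(waves) * per_wave_seconds

if __name__ == "__main__":
    waves = staggered_schedule(num_procs=16, max_concurrent_writers=4)
    for i, wave in enumerate(waves):
        print(f"wave {i}: ranks {wave}")
    print("total:", estimated_checkpoint_time(waves, per_wave_seconds=2.5), "s")
```

Whether staggering pays off depends on how badly simultaneous writers saturate the network and file system, which is exactly what the thesis's event-driven simulator is built to estimate for realistic system configurations.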
Advisors/Committee Members: Lin, Yun Calvin (advisor), Choi, Sung-Eun (committee member), Alvisi, Lorenzo (committee member), McKinley, Kathryn S. (committee member), Pingali, Keshav (committee member).
Subjects/Keywords: Supercomputing; Checkpointing; Simulator; Large-scale parallel applications
APA (6th Edition):
Norman, A. N. (2010). Compiler-assisted staggered checkpointing. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-08-1746
Chicago Manual of Style (16th Edition):
Norman, Alison Nicholas. “Compiler-assisted staggered checkpointing.” 2010. Doctoral Dissertation, University of Texas – Austin. Accessed March 06, 2021.
http://hdl.handle.net/2152/ETD-UT-2010-08-1746.
MLA Handbook (7th Edition):
Norman, Alison Nicholas. “Compiler-assisted staggered checkpointing.” 2010. Web. 06 Mar 2021.
Vancouver:
Norman AN. Compiler-assisted staggered checkpointing. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2010. [cited 2021 Mar 06].
Available from: http://hdl.handle.net/2152/ETD-UT-2010-08-1746.
Council of Science Editors:
Norman AN. Compiler-assisted staggered checkpointing. [Doctoral Dissertation]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-08-1746