You searched for +publisher:"University of Texas – Austin" +contributor:("Browne, James C.").
Showing records 1 – 12 of 12 total matches.
No search limiters apply to these results.

University of Texas – Austin
1.
Allen, Gregory Eugene.
Computational process networks : a model and framework for high-throughput signal processing.
Degree: PhD, Electrical and Computer Engineering, 2011, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2011-05-2987
Many signal and image processing systems for high-throughput, high-performance applications require concurrent implementations in order to realize desired performance. Developing software for concurrent systems is widely acknowledged to be difficult, with common industry practice leaving the burden of preventing concurrency problems on the programmer.
The Kahn Process Network model provides the mathematically provable property of determinism of a program result regardless of the execution order of its processes, including concurrent execution. This model is also natural for describing streams of data samples in a signal processing system, where processes transform streams from one data type to another. However, a Kahn Process Network may require infinite memory to execute.
I present the dynamic distributed deadlock detection and resolution (D4R) algorithm, which permits execution of Process Networks in bounded memory if it is possible. It detects local deadlocks in a Process Network, determines whether the deadlock can be resolved and, if so, identifies the process that must take action to resolve the deadlock.
I propose the Computational Process Network (CPN) model, which is based on the formalisms of Kahn’s PN model, but with enhancements designed to make it efficiently implementable. These enhancements include multi-token transactions to reduce execution overhead, multi-channel queues for multi-dimensional synchronous data, zero-copy semantics, and consumer and producer firing thresholds for queues. Firing thresholds enable memoryless computation of sliding window algorithms, which are common in signal processing systems. I show that the Computational Process Network model preserves the formal properties of Process Networks, while reducing the operations required to implement sliding window algorithms on continuous streams of data.
I also present a high-throughput software framework that implements the Computational Process Network model using C++, and which maps naturally onto distributed targets. This framework uses POSIX threads, and can exploit parallelism in both multi-core and distributed systems.
Finally, I present case studies to exercise this framework and demonstrate its performance and utility. The final case study is a three-dimensional circular convolution sonar beamformer and replica correlator, which demonstrates the high throughput and scalability of a real-time signal processing algorithm using the CPN model and framework.
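The firing-threshold idea from this abstract can be sketched in a few lines of Python. This is a toy model only: the class and method names are invented for illustration and are not the CPN framework's API, and it copies tokens out of the queue rather than honoring the zero-copy semantics the abstract describes. A firing threshold larger than the per-firing consumption yields overlapping (sliding) windows without re-buffering the stream in the consumer.

```python
from collections import deque

class ThresholdQueue:
    """Toy FIFO with a consumer firing threshold (illustrative names,
    not the CPN framework's actual API)."""
    def __init__(self, fire_threshold, consume):
        assert consume <= fire_threshold
        self.fire_threshold = fire_threshold  # tokens needed before the consumer may fire
        self.consume = consume                # tokens actually dequeued per firing
        self.buf = deque()

    def put(self, token):
        self.buf.append(token)

    def ready(self):
        return len(self.buf) >= self.fire_threshold

    def fire(self):
        """Return a window of fire_threshold tokens, then advance by
        `consume`, so consecutive windows overlap by
        fire_threshold - consume tokens."""
        assert self.ready()
        window = list(self.buf)[:self.fire_threshold]
        for _ in range(self.consume):
            self.buf.popleft()
        return window

# Sliding 4-sample window advancing by 1 sample over a stream of 8 samples:
q = ThresholdQueue(fire_threshold=4, consume=1)
windows = []
for sample in range(8):
    q.put(sample)
    while q.ready():
        windows.append(q.fire())
# windows[0] == [0, 1, 2, 3], windows[1] == [1, 2, 3, 4], ... (5 windows total)
```

Setting `consume == fire_threshold` degenerates to ordinary non-overlapping reads, which is why thresholds subsume plain FIFO behavior.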
Advisors/Committee Members: Evans, Brian L. (Brian Lawrence), 1965- (advisor), Browne, James C. (committee member), Chase, Craig M. (committee member), John, Lizy K. (committee member), Loeffler, Charles M. (committee member).
Subjects/Keywords: Concurrency; Distributed systems; High-performance computing; Signal processing; Deadlock

2.
Mahmood, Nasim, 1976-.
Productivity with performance: property/behavior-based automated composition of parallel programs from self-describing components.
Degree: PhD, Computer Sciences, 2007, University of Texas – Austin
URL: http://hdl.handle.net/2152/3360
Development of efficient and correct parallel programs is a complex task. These parallel codes have strong requirements for performance and correctness and must operate robustly and efficiently across a wide spectrum of application parameters and on a wide spectrum of execution environments. Scientific and engineering programs increasingly use adaptive algorithms whose behavior can change dramatically at runtime. Performance properties are often not known until programs are tested and performance may degrade during execution. Many errors in parallel programs arise in incorrect programming of interactions and synchronizations. Testing has proven to be inadequate. Formal proofs of correctness are needed. This research is based on systematic application of software engineering methods to effective development of efficiently executing families of high performance parallel programs. We have developed a framework (P-COM²) for development of parallel program families which addresses many of the problems cited above. The conceptual innovations underlying P-COM² are a software architecture specification language based on self-describing components, a timing and sequencing algorithm which enables execution of programs with both concrete and abstract components and a formal semantics for the architecture specification language. The description of each component incorporates compiler-useable specifications for the properties and behaviors of the components, the functionality a component implements, pre-conditions and post-conditions on the inputs and outputs and state machine based sequencing control for invocations of the component.
The P-COM² compiler and runtime system implement these concepts to enable: (a) evolutionary development where a program instance is evolved from a performance model to a complete application with performance known at each step of evolution, (b) automated composition of program instances targeting specific application instances and/or execution environments from self-describing components including generation of all parallel structuring, (c) runtime adaptation of programs on a component-by-component basis, (d) runtime validation of pre- and post-conditions and sequencing of interactions and (e) formal proofs of correctness for interactions among components based on model checking of the interaction and synchronization properties of the program. The concepts and their integration are defined, the implementation is described and the capabilities of the system are illustrated through several examples.
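The runtime validation of pre- and post-conditions described in item (d) can be sketched as a component that carries its own machine-checkable description. This is a minimal illustration, not P-COM²'s actual specification language or schema; the class and attribute names are invented.

```python
class Component:
    """A self-describing component in miniature: callable logic bundled
    with predicates for its pre- and post-conditions (illustrative names,
    not the P-COM2 schema)."""
    def __init__(self, name, func, pre, post):
        self.name = name
        self.func = func
        self.pre = pre    # predicate over the inputs
        self.post = post  # predicate over (inputs, output)

    def invoke(self, *args):
        # Validate the declared pre-condition before running the logic.
        if not self.pre(*args):
            raise ValueError(f"{self.name}: precondition violated for {args}")
        result = self.func(*args)
        # Validate the declared post-condition on the result.
        if not self.post(args, result):
            raise ValueError(f"{self.name}: postcondition violated")
        return result

# A component whose description promises a numeric input and a non-negative result:
sq = Component("square", lambda x: x * x,
               pre=lambda x: isinstance(x, (int, float)),
               post=lambda args, r: r >= 0)
assert sq.invoke(-3) == 9
```

Because the conditions are data attached to the component rather than code buried inside it, a compiler or composition tool can also read them when matching components, which is the direction the abstract describes.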
Advisors/Committee Members: Browne, James C. (advisor).
Subjects/Keywords: Parallel programs (Computer programs); Parallel programs (Computer programs) – Verification

3.
Kane, Kevin Michael.
Access control in decentralized, distributed systems.
Degree: PhD, Computer Sciences, 2006, University of Texas – Austin
URL: http://hdl.handle.net/2152/2895
Subjects/Keywords: Computer networks – Access control

4.
Xie, Fei.
Integration of model checking into software development processes.
Degree: PhD, Computer Sciences, 2004, University of Texas – Austin
URL: http://hdl.handle.net/2152/1458
Testing has been the dominant method for validation of software systems. As software systems become complex, conventional testing methods have become inadequate. Model checking is a powerful formal verification method. It supports systematic exploration of all states or execution paths of the system being verified. There are two major challenges in practical and scalable application of model checking to software systems: (1) the applicability of model checking to software systems and (2) the intrinsic complexity of model checking.
In this dissertation, we have developed a comprehensive approach to integration of model checking into two emerging software development processes: Model-Driven Development (MDD) and Component-Based Development (CBD), and a combination of MDD and CBD. This approach addresses the two major challenges under the following framework: (1) bridging applicability gaps through automatic translation of software representations to directly model-checkable formal representations, (2) seamless integration of state space reduction algorithms in the translation through static analysis, and (3) scaling model checking capability and achieving state space reduction by systematically exploring compositional structures of software systems. We have integrated model checking into MDD by applying mature model checking techniques to industrial design-level software representations through automatic translation of these representations to the input formal representations of model checkers. We have developed a translation-based approach to compositional reasoning of software systems, which simplifies the proof, implementation, and application of compositional reasoning rules at the software system level by reusing the proof and implementation of existing compositional reasoning rules for directly model-checkable formal representations. We have developed an integrated state space reduction framework which systematically conducts a top-down decomposition of a large and complex software system into directly model-checkable components by exploring domain-specific knowledge. We have designed, implemented, and applied a bottom-up approach to model checking of component-based software systems, which composes verified systems from verified components and integrates model checking into CBD. We have further scaled model checking of component-based systems by exploring the synergy between MDD and CBD, i.e., specifying components in executable design languages, and realizing the bottom-up approach based on model checking of software designs through translation.
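The "systematic exploration of all states" at the core of model checking can be sketched as an explicit-state reachability search for a safety property. This toy checker is illustrative only; it is not the dissertation's tool, and real model checkers add symbolic representations and the state space reductions the abstract discusses.

```python
from collections import deque

def check_safety(init, successors, is_bad):
    """Breadth-first exploration of every reachable state. Returns a
    counterexample path to a bad state, or None if the safety property
    holds on all reachable states."""
    frontier = deque([(init, (init,))])
    seen = {init}
    while frontier:
        state, path = frontier.popleft()
        if is_bad(state):
            return path          # shortest counterexample trace
        for nxt in successors(state):
            if nxt not in seen:  # state hashing avoids re-exploration
                seen.add(nxt)
                frontier.append((nxt, path + (nxt,)))
    return None                  # property verified exhaustively

# Toy system: a counter mod 5 that must never reach 3.
trace = check_safety(0, lambda s: [(s + 1) % 5], lambda s: s == 3)
# trace == (0, 1, 2, 3)
```

The exhaustive `seen` set is exactly where state explosion bites, which motivates the translation-time reductions and compositional decomposition the abstract proposes.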
Advisors/Committee Members: Browne, James C. (advisor).
Subjects/Keywords: Computer software – Verification; Computer software – Development
5.
Mahimkar, Ajay.
Performance diagnosis in large operational networks.
Degree: PhD, Computer Sciences, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-05-1223
IP networks have become the unified platform that supports a rich and extremely diverse set of applications and services, including traditional IP data service, Voice over IP (VoIP), smart mobile devices (e.g., iPhone), Internet television (IPTV) and online gaming. Network performance and reliability are critical issues in today's operational networks because many applications impose increasingly stringent reliability and performance requirements. Even the smallest network performance degradation could cause significant customer distress. In addition, new network and service features (e.g., MPLS fast re-route capabilities) are continually rolled out across the network to support new applications, improve network performance, and reduce the operational cost. Network operators are challenged with ensuring that network reliability and performance is improved over time even in the face of constant changes, network and service upgrades and recurring faulty behaviors. It is critical to detect, troubleshoot and repair performance degradations in a timely and accurate fashion. This is extremely challenging in large IP networks due to their massive scale, complicated topology, high protocol complexity, and continuously evolving nature through either software or hardware upgrades, configuration changes or traffic engineering.
In this dissertation, we first propose a novel infrastructure NICE (Network-wide Information Correlation and Exploration) that enables detection and troubleshooting of chronic network conditions by analyzing statistical correlations across multiple data sources. NICE uses a novel circular permutation test to determine the statistical significance of correlation. It also allows flexible analysis at various spatial granularity (e.g., link, router, network level, etc.). We validate NICE using real measurement data collected at a tier-1 ISP network. The results are quite positive. We then apply NICE to troubleshoot real network issues in the tier-1 ISP network. In all three case studies, NICE successfully uncovers previously unknown chronic network conditions, resulting in improved network operations.
Second, we extend NICE to detect and troubleshoot performance problems in IPTV networks. Compared to traditional ISP networks, IPTV distribution network typically adopts a different structure (tree-like multicast as opposed to mesh), imposes more restrictive service constraints (both in reliability and performance), and often faces a much larger scalability issue (managing millions of residential gateways versus thousands of provider-edge routers). Tailoring to the scale and structure of IPTV network, we propose a novel multi-resolution data analysis approach Giza that enables fast detection and localization of regions in the multicast tree hierarchy where the problem becomes significant. Furthermore, we develop several statistical data mining techniques to troubleshoot the identified problems and diagnose their root causes. Validation against operational experiences demonstrates the effectiveness of our…
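A simplified version of the circular permutation test the abstract attributes to NICE can be sketched as follows. This assumes Pearson correlation as the correlation measure and uses circular shifts of one series as the null model; the function names and parameters are invented for illustration, not NICE's implementation.

```python
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def circular_perm_pvalue(x, y, trials=200, seed=0):
    """Estimate how often a random circular rotation of y correlates
    with x at least as strongly as the observed series does. Rotations
    preserve y's autocorrelation structure, which is what makes this a
    sensible null model for time series."""
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    n = len(y)
    hits = 0
    for _ in range(trials):
        k = rng.randrange(1, n)      # random nonzero rotation amount
        shifted = y[k:] + y[:k]
        if abs(pearson(x, shifted)) >= observed:
            hits += 1
    return hits / trials             # small value => significant correlation
```

A naive shuffle-based permutation test would destroy temporal structure and overstate significance on bursty measurement data; rotating the series instead is the key point of the circular variant.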
Advisors/Committee Members: Zhang, Yin, doctor of computer science (advisor), Browne, James C. (committee member), Dhillon, Inderjit (committee member), Qiu, Lili (committee member), Yates, Jennifer (committee member).
Subjects/Keywords: Network operations; Network management; Performance troubleshooting; Reliability; Network upgrades; IPTV; Chronic behaviors

6.
Sharygina, Natalia Yevgenyevna.
Model checking of software control systems.
Degree: PhD, Mechanical Engineering, 2002, University of Texas – Austin
URL: http://hdl.handle.net/2152/920
Subjects/Keywords: Automatic control – Data processing; Computer software; Software engineering
7.
Rager, David Lawrence.
Parallelizing an interactive theorem prover : functional programming and proofs with ACL2.
Degree: PhD, Computer Science, 2012, University of Texas – Austin
URL: http://hdl.handle.net/2152/19482
Multi-core systems have become commonplace; however, theorem provers often do not take advantage of the additional computing resources in an interactive setting. This research explores automatically using these additional resources to lessen the delay between when users submit conjectures to the theorem prover and when they receive feedback from the prover that is useful in discovering how to successfully complete the proof of a particular theorem.
This research contributes mechanisms that permit applicative programs to execute in parallel while simultaneously preparing these programs for verification by a semi-automatic reasoning system. It also contributes a parallel version of an automated theorem prover, with management of user interaction issues, such as output and how inherently single-threaded, user-level proof features can be configured for use with parallel computation. Finally, this dissertation investigates the types of proofs that are amenable to parallel execution. This investigation yields the result that almost all proof attempts that require a non-trivial amount of time can benefit from parallel execution. Proof attempts executed in parallel almost always provide the aforementioned feedback sooner than if they executed serially, and their execution time is often significantly reduced.
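The early-feedback pattern the abstract describes, running independent proof obligations concurrently and reporting each one as it finishes, can be sketched with a thread pool. The trivial "prover" below is a stand-in for illustration and has nothing to do with ACL2's actual interface.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def attempt(goal):
    """Stand-in for a prover call (illustrative only): a goal n counts
    as 'proved' when n is even."""
    return goal, goal % 2 == 0

def prove_all(goals, workers=4):
    """Submit independent proof obligations to a pool and collect each
    result as soon as it completes, instead of waiting for the whole
    batch, so the user sees failures early."""
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(attempt, g) for g in goals]
        for fut in as_completed(futures):
            goal, ok = fut.result()
            results[goal] = ok   # feedback point: report `goal` here immediately
    return results
```

The hard part the dissertation addresses, and this sketch does not, is keeping inherently single-threaded interactive features (output, proof debugging) coherent while the attempts run in parallel.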
Advisors/Committee Members: Hunt, Warren A., 1958- (advisor), Browne, James C. (committee member), Kaufmann, Matt (committee member), Moore, J S (committee member), Sawada, Jun (committee member), Witchel, Emmett (committee member).
Subjects/Keywords: Theorem proving; ACL2; Parallel; Functional languages; Parallel proof process

8.
Song, Jianping.
Constraint-based real-time scheduling for process control.
Degree: PhD, Computer Sciences, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-05-1102
This research addresses real-time task scheduling in industrial process control. It includes a constraint-based scheduler which is based on MSP.RTL, a tool for real-time multiprocessor scheduling problems with a wide variety of timing constraints. This dissertation extends previous work in two broad directions: improving the tool itself and broadening the application domain of the tool to include wired and wireless industrial process control. For the tool itself, we propose enhancements to MSP.RTL in three steps. In the first step, we modify the data structure for representing the temporal constraint graph, cutting the memory usage in half. In the second step, we model the search problem as a constraint satisfaction problem (CSP) and utilize backmarking and conflict-directed backjumping to speed up the search process. In the third step, we perform the search from the perspective of constraint satisfaction programming. As a result, we are able to use existing CSP techniques efficiently, such as look ahead, backjumping and consistency checking. Compared to the various ad hoc heuristics used in the original version, the new approach is more systematic and powerful.
To exercise the new MSP.RTL tool, we acquired an updated version of the Boeing 777 Integrated Airplane Information Management System (AIMS). This new benchmark problem is more complicated than the old one used in the original tool in that data communications are described in messages and a message can have multiple senders and receivers. The new MSP.RTL tool successfully solved the new benchmark problem, whereas the old tool would not be able to do so.
In order to apply real-time scheduling in industrial process control, we carry out our research in two directions. First, we apply the improved tool to traditional wired process control. The tool has been successfully applied to solve the block assignment problem in Fieldbus networks, where each block comprising the control system is assigned to a specific device such that certain metrics of the system can be optimized. Second, wireless industrial control has received a lot of attention recently, and we experimented with the tool to schedule communications on a simulated wireless industrial network.
In order to integrate the scheduler in real wireless process control systems, we are building an experimental platform based on the WirelessHART standard. WirelessHART, as the first open wireless standard for process control, defines a time-synchronized MAC layer, which is ideal for real-time process control. We have successfully implemented a prototype WirelessHART stack on Freescale JM128 toolkits and built some demo applications on top of it.
Even with the scheduler tool to regulate communications in a wireless process control system, it may still be possible that communications cannot be established on an inferior wireless link within an expected period. In order to handle this type of failure, we propose to make the control modules aware of the unreliability of wireless links, that is, to make the control…
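The CSP-style search the abstract describes can be sketched as a tiny backtracking scheduler. This is a toy stand-in, not MSP.RTL: the constraint model (pairwise slot conflicts) and all names are invented, and real schedulers layer on the backmarking, conflict-directed backjumping, and look-ahead techniques mentioned above.

```python
def schedule(tasks, slots, conflicts):
    """Assign each task a time slot so that conflicting tasks never
    share one. Plain chronological backtracking with a consistency
    check before each assignment."""
    assignment = {}

    def consistent(task, slot):
        # Consistency check: no already-assigned conflicting task holds this slot.
        return all(assignment.get(other) != slot
                   for other in conflicts.get(task, ()))

    def backtrack(i):
        if i == len(tasks):
            return True                  # every task placed
        task = tasks[i]
        for slot in slots:
            if consistent(task, slot):
                assignment[task] = slot
                if backtrack(i + 1):
                    return True
                del assignment[task]     # undo and try the next slot
        return False                     # dead end: force backtracking above

    return assignment if backtrack(0) else None

# Two conflicting control tasks must land in different slots:
plan = schedule(["pid", "log"], [0, 1], {"pid": ["log"], "log": ["pid"]})
# plan == {"pid": 0, "log": 1}
```

Chronological backtracking revisits many hopeless branches; jumping straight back to the variable that caused a conflict (backjumping) and caching known-bad checks (backmarking) are exactly the refinements the dissertation adds.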
Advisors/Committee Members: Mok, Aloysius Ka-Lau (advisor), Browne, James C. (committee member), Gouda, Mohamed G. (committee member), Zhang, Yin (committee member), Chen, Deji (committee member).
Subjects/Keywords: Multiprocessor scheduling; Industrial process control; MSP.RTL; Fieldbus; WirelessHART; Wireless sensor networks; PID; PIDPlus

9.
Shao, Danhua.
Application of local semantic analysis in fault prediction and detection.
Degree: PhD, Electrical and Computer Engineering, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-05-1086
▼ To improve quality of software systems, change-based fault prediction and scope-bounded checking have been used to predict or detect faults during software development. In fault prediction, changes to program source code, such as added lines or deleted lines, are used to predict potential faults. In fault detection, scope-bounded checking of programs is an effective technique for finding subtle faults. The central idea is to check all program executions up to a given bound. The technique takes two basic forms: scope-bounded static checking, where all bounded executions of a program are transformed into a formula that represents the violation of a correctness property and any solution to the formula represents a counterexample; or scope-bounded testing where a program is tested against all (small) inputs up to a given bound on the input size.
Although the accuracies of change-based fault prediction and scope-bounded checking have been evaluated experimentally, both approaches have effectiveness and efficiency limitations. Previous change-based fault prediction considers only the code modified by a change while ignoring the code impacted by the change. Scope-bounded testing concerns only the correctness specifications and ignores the internal structure of the program. Although scope-bounded static checking does consider the internal structure of programs, formulas translated from structurally complex programs can overwhelm the back-end analyzer and fail to produce a result within a reasonable time.
To improve effectiveness and efficiency of these approaches, we introduce local semantic analysis into change-based fault prediction and scope-bounded checking. We use data-flow analysis to disclose internal dependencies within a program. Based on these dependencies, we identify code segments impacted by a change and apply fault prediction metrics on impacted code. Empirical studies with real data showed that semantic analysis is effective and efficient in predicting faults in large-size changes or short-interval changes. While generating inputs for scope-bounded testing, we use control-flow to guide test generation so that code coverage can be achieved with minimal tests. To increase the scalability of scope-bounded checking, we split a bounded program into smaller sub-programs according to data-flow and control-flow analysis. Thus the problem of scope-bounded checking for the given program reduces to several sub-problems, where each sub-problem requires the constraint solver to check a less complex formula, thereby likely reducing the solver’s overall workload. Experimental results show that our approach provides significant speed-ups over the traditional approach.
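The scope-bounded testing idea above — check a program against all small inputs up to a given bound — can be sketched in a few lines. The function under test, the seeded bug, and the correctness property below are all hypothetical illustrations, not taken from the dissertation.

```python
# Sketch of scope-bounded testing: exhaustively check a correctness
# property over every input up to a small bound. A counterexample found
# within the scope witnesses a fault.
from itertools import product

def buggy_max(a, b):
    # Seeded bug: returns the wrong value when both arguments are negative.
    if a < 0 and b < 0:
        return min(a, b)
    return a if a > b else b

def scope_bounded_check(fn, prop, bound):
    """Check prop(args, fn(*args)) for every pair in [-bound, bound]^2."""
    for args in product(range(-bound, bound + 1), repeat=2):
        if not prop(args, fn(*args)):
            return args  # counterexample found within the scope
    return None  # no violation among all bounded inputs

# Property: the result must be >= both inputs and equal to one of them.
prop = lambda args, r: r >= max(args) and r in args
counterexample = scope_bounded_check(buggy_max, prop, bound=3)
```

Even the tiny bound of 3 suffices to expose the seeded bug, which is the "small scope" intuition behind the technique.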
Advisors/Committee Members: Khurshid, Sarfraz (advisor), Perry, Dewayne E. (advisor), Julien, Christine (committee member), Barber, Suzanne (committee member), Browne, James C. (committee member), Lifschitz, Vladimir (committee member).
Subjects/Keywords: Version management; Semantic analysis; Data-flow analysis; Control-flow analysis; Scope-bounded checking; Alloy; First-order logic; SAT; Computation graph; White-box testing
APA (6th Edition):
Shao, D. (2010). Application of local semantic analysis in fault prediction and detection. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-05-1086
Chicago Manual of Style (16th Edition):
Shao, Danhua. “Application of local semantic analysis in fault prediction and detection.” 2010. Doctoral Dissertation, University of Texas – Austin. Accessed March 03, 2021.
http://hdl.handle.net/2152/ETD-UT-2010-05-1086.
MLA Handbook (7th Edition):
Shao, Danhua. “Application of local semantic analysis in fault prediction and detection.” 2010. Web. 03 Mar 2021.
Vancouver:
Shao D. Application of local semantic analysis in fault prediction and detection. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2010. [cited 2021 Mar 03].
Available from: http://hdl.handle.net/2152/ETD-UT-2010-05-1086.
Council of Science Editors:
Shao D. Application of local semantic analysis in fault prediction and detection. [Doctoral Dissertation]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-05-1086

University of Texas – Austin
10.
Poon, Wing-Chi.
Real-time hierarchical hypervisor.
Degree: PhD, Computer Sciences, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-08-1842
▼ Both real-time virtualization and recursive virtualization are desirable properties of a virtual machine monitor (or hypervisor). Although the prospects for virtualization, and even recursive virtualization, have improved as PC hardware has grown faster, the real-time systems community has so far been unable to reap much benefit. This is because no existing virtualization mechanism can properly support the stringent timing requirements of real-time systems. It is hard to do real-time virtualization, and it is even harder to do it recursively. In this dissertation, we propose a framework whereby the hypervisor is capable of running real-time guests and participating in recursive virtualization. Such a hypervisor is called a real-time hierarchical hypervisor.
We first look at virtualization of abstract resource types from the real-time systems perspective. Unlike previous work on recursive real-time partitioning, which assumes fully preemptable resources, we concentrate on other, often more practical, types of scheduling constraints, especially non-preemptive and limited-preemptive ones. Then we consider the current x86 architecture and explore the problems that must be addressed for real-time recursive virtualization. We drill down into the problem that affects timing properties the most: the recursive forwarding and delivery of interrupts, exceptions, and intercepts. We choose the x86 architecture because it is popular and readily available, but it is by no means the only architecture of choice for real-time recursive virtualization. We conclude the research with an architecture-independent discussion of future possibilities in real-time recursive virtualization.
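A common way to reason about a guest running on a resource partition like those discussed above is a supply-bound/demand-bound comparison: a bounded-delay partition guarantees at least alpha * (t - delta) units of CPU over any window of length t. The check below is a generic schedulability sketch under that model; the task set and parameter values are illustrative, not from the dissertation.

```python
# Sketch of a bounded-delay partition schedulability check: a guest's
# EDF demand is compared against the partition's guaranteed supply at
# every deadline up to a horizon.

def supply(alpha, delta, t):
    """Lower bound on CPU time a bounded-delay partition supplies in [0, t]."""
    return max(0.0, alpha * (t - delta))

def edf_demand(tasks, t):
    """EDF demand bound for implicit-deadline tasks: floor(t/T_i) * C_i."""
    return sum(int(t // period) * wcet for wcet, period in tasks)

def schedulable(tasks, alpha, delta, horizon):
    """Guest is schedulable if demand <= supply at every deadline."""
    deadlines = sorted({k * p for _, p in tasks
                        for k in range(1, int(horizon // p) + 1)})
    return all(edf_demand(tasks, t) <= supply(alpha, delta, t)
               for t in deadlines)

tasks = [(1.0, 5.0), (2.0, 10.0)]   # (wcet, period) pairs, utilization 0.4
ok = schedulable(tasks, alpha=0.5, delta=1.0, horizon=40.0)
```

Note that the same check can be applied recursively: a child partition carved out of this one simply sees a smaller alpha and a larger delta, which is the intuition behind hierarchical resource partitioning.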
Advisors/Committee Members: Mok, Aloysius Ka-Lau (advisor), Browne, James C. (committee member), Dahlin, Mike (committee member), Plaxton, Greg (committee member), Chen, Deji (committee member).
Subjects/Keywords: Real-time; Recursive virtualization; Abstract resource; Bounded delay resource partition; x86 architecture; Non-preemptive scheduling; Robustness; Interrupt forwarding; Hierarchical hypervisor
APA (6th Edition):
Poon, W. (2010). Real-time hierarchical hypervisor. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-08-1842
Chicago Manual of Style (16th Edition):
Poon, Wing-Chi. “Real-time hierarchical hypervisor.” 2010. Doctoral Dissertation, University of Texas – Austin. Accessed March 03, 2021.
http://hdl.handle.net/2152/ETD-UT-2010-08-1842.
MLA Handbook (7th Edition):
Poon, Wing-Chi. “Real-time hierarchical hypervisor.” 2010. Web. 03 Mar 2021.
Vancouver:
Poon W. Real-time hierarchical hypervisor. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2010. [cited 2021 Mar 03].
Available from: http://hdl.handle.net/2152/ETD-UT-2010-08-1842.
Council of Science Editors:
Poon W. Real-time hierarchical hypervisor. [Doctoral Dissertation]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-08-1842

University of Texas – Austin
11.
Chang, Walter Chochen.
Improving dynamic analysis with data flow analysis.
Degree: PhD, Computer Sciences, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-08-1586
▼ Many challenges in software quality can be tackled with dynamic analysis. However, these techniques are often limited in their efficiency or scalability as they are often applied uniformly to an entire program. In this thesis, we show that dynamic program analysis can be made significantly more efficient and scalable by first performing a static data flow analysis so that the dynamic analysis can be selectively applied only to important parts of the program. We apply this general principle to the design and implementation of two different systems, one for runtime security policy enforcement and the other for software test input generation.
For runtime security policy enforcement, we enforce user-defined policies using a dynamic data flow analysis that is more general and flexible than previous systems. Our system uses the user-defined policy to drive a static data flow analysis that identifies and instruments only the statements that may be involved in a security vulnerability, often eliminating the need to track most objects and greatly reducing the overhead. For taint analysis on a set of five server programs, the slowdown is only 0.65%, two orders of magnitude lower than previous taint tracking systems. Our system also has negligible overhead on file disclosure vulnerabilities, a problem that taint tracking cannot handle.
For software test case generation, we introduce the idea of targeted testing, which focuses testing effort on select parts of the program instead of treating all program paths equally. Our “Bullseye” system uses a static analysis performed with respect to user-defined “interesting points” to steer the search down certain paths, thereby finding bugs faster. We also introduce a compiler transformation that allows symbolic execution to automatically perform boundary condition testing, revealing bugs that could be missed even if the correct path is tested. For our set of 9 benchmarks, Bullseye finds bugs an average of 2.5× faster than a conventional depth-first search and finds numerous bugs that DFS could not. In addition, our automated boundary condition testing transformation allows both Bullseye and depth-first search to find numerous bugs that they could not find before, even when all paths were explored.
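The selective-instrumentation idea — use a static data-flow pass so the dynamic analysis touches only statements that can actually participate in a vulnerability — can be sketched as a reachability intersection over a def-use graph. The tiny program, its statements, and the source/sink choices below are hypothetical illustrations, not the dissertation's actual system.

```python
# Sketch: statements needing runtime taint tracking are exactly those
# that lie on some data-flow chain from a taint source to a sink, i.e.
# forward-reachable from a source AND backward-reachable from a sink.
from collections import deque

# statement -> statements that consume its value (def-use edges)
def_use = {
    "s1: x = recv()":   ["s3: y = parse(x)"],
    "s2: n = len(buf)": ["s5: log(n)"],
    "s3: y = parse(x)": ["s4: exec(y)"],
    "s4: exec(y)":      [],
    "s5: log(n)":       [],
}
sources = {"s1: x = recv()"}   # untrusted input
sinks   = {"s4: exec(y)"}      # security-sensitive operation

def reachable(graph, starts):
    """Set of nodes reachable from `starts` via BFS."""
    seen, work = set(starts), deque(starts)
    while work:
        for succ in graph[work.popleft()]:
            if succ not in seen:
                seen.add(succ)
                work.append(succ)
    return seen

def statements_to_instrument(graph, sources, sinks):
    reverse = {s: [] for s in graph}
    for s, succs in graph.items():
        for t in succs:
            reverse[t].append(s)
    # Only statements on a source-to-sink chain need instrumentation.
    return reachable(graph, sources) & reachable(reverse, sinks)

selected = statements_to_instrument(def_use, sources, sinks)
```

Here s2 and s5 never touch a source-to-sink chain, so they stay uninstrumented — the mechanism by which most of a program can be left running at full speed.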
Advisors/Committee Members: Lin, Yun Calvin (advisor), McKinley, Kathryn (committee member), Browne, James C. (committee member), Khurshid, Sarfraz (committee member), Myers, Andrew (committee member).
Subjects/Keywords: Data flow; Software testing; Software security; Dynamic analysis; Static analysis; Test input generation
APA (6th Edition):
Chang, W. C. (2010). Improving dynamic analysis with data flow analysis. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-08-1586
Chicago Manual of Style (16th Edition):
Chang, Walter Chochen. “Improving dynamic analysis with data flow analysis.” 2010. Doctoral Dissertation, University of Texas – Austin. Accessed March 03, 2021.
http://hdl.handle.net/2152/ETD-UT-2010-08-1586.
MLA Handbook (7th Edition):
Chang, Walter Chochen. “Improving dynamic analysis with data flow analysis.” 2010. Web. 03 Mar 2021.
Vancouver:
Chang WC. Improving dynamic analysis with data flow analysis. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2010. [cited 2021 Mar 03].
Available from: http://hdl.handle.net/2152/ETD-UT-2010-08-1586.
Council of Science Editors:
Chang WC. Improving dynamic analysis with data flow analysis. [Doctoral Dissertation]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-08-1586

University of Texas – Austin
12.
Chan, Ernie W., 1982-.
Application of dependence analysis and runtime data flow graph scheduling to matrix computations.
Degree: PhD, Computer Sciences, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-08-1563
▼ We present a methodology for exploiting shared-memory parallelism within matrix computations by expressing linear algebra algorithms as directed acyclic graphs. Our solution involves a separation of concerns that completely hides the exploitation of parallelism from the code that implements the linear algebra algorithms. This approach to the problem is fundamentally different since we also address the issue of programmability instead of strictly focusing on parallelization. Using the separation of concerns, we present a framework for analyzing and developing scheduling algorithms and heuristics for this problem domain. As such, we develop a theory and practice of scheduling concepts for matrix computations in this dissertation.
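The core runtime mechanism described above — express a blocked linear algebra algorithm as a DAG of tasks and dispatch each task once its data dependences are satisfied — can be sketched as a ready-queue scheduler. The task names (BLAS/LAPACK-style kernels on 2x2 blocks of a matrix) and the dependence graph are illustrative, not the dissertation's actual framework.

```python
# Sketch of runtime data-flow-graph scheduling: tasks on matrix blocks
# are recorded with their dependences, and the runtime dispatches any
# task whose inputs have all been produced.
from collections import deque

# task -> tasks that must finish before it may run
deps = {
    "POTRF(A00)": [],
    "TRSM(A10)":  ["POTRF(A00)"],
    "SYRK(A11)":  ["TRSM(A10)"],
    "POTRF(A11)": ["SYRK(A11)"],
}

def runtime_schedule(deps):
    """Dispatch tasks as their dependences complete (topological order)."""
    remaining = {t: len(d) for t, d in deps.items()}
    dependents = {t: [] for t in deps}
    for task, ds in deps.items():
        for d in ds:
            dependents[d].append(task)
    ready = deque(t for t, n in remaining.items() if n == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)  # a real runtime hands `task` to a worker thread
        for succ in dependents[task]:
            remaining[succ] -= 1
            if remaining[succ] == 0:
                ready.append(succ)
    return order

order = runtime_schedule(deps)
```

The separation of concerns is visible even in this toy: the code that names the tasks and their dependences knows nothing about threads, and the scheduler knows nothing about linear algebra.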
Advisors/Committee Members: Van de Geijn, Robert A. (advisor), Browne, James C. (committee member), Lin, Calvin (committee member), Pingali, Keshav (committee member), Plaxton, Charles G. (committee member), Quintana-Orti, Enrique S. (committee member).
Subjects/Keywords: Matrix computation; Directed acyclic graph; Algorithm-by-blocks
APA (6th Edition):
Chan, E. W. (2010). Application of dependence analysis and runtime data flow graph scheduling to matrix computations. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-08-1563
Chicago Manual of Style (16th Edition):
Chan, Ernie W. “Application of dependence analysis and runtime data flow graph scheduling to matrix computations.” 2010. Doctoral Dissertation, University of Texas – Austin. Accessed March 03, 2021.
http://hdl.handle.net/2152/ETD-UT-2010-08-1563.
MLA Handbook (7th Edition):
Chan, Ernie W. “Application of dependence analysis and runtime data flow graph scheduling to matrix computations.” 2010. Web. 03 Mar 2021.
Vancouver:
Chan EW. Application of dependence analysis and runtime data flow graph scheduling to matrix computations. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2010. [cited 2021 Mar 03].
Available from: http://hdl.handle.net/2152/ETD-UT-2010-08-1563.
Council of Science Editors:
Chan EW. Application of dependence analysis and runtime data flow graph scheduling to matrix computations. [Doctoral Dissertation]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-08-1563