You searched for +publisher:"University of Texas – Austin" +contributor:("Perry, Dewayne").
Showing records 1 – 30 of 43 total matches.

University of Texas – Austin
1. Vadysirisack, Pang Lithisay.
Practical software testing for an FDA-regulated environment.
Degree: MS in Engineering, Electrical and Computer Engineering, 2011, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2011-12-4719
Unlike hardware, software does not degrade over time or with frequent use. Also unlike hardware, software can be easily changed. This unique characteristic gives software much of its power, but it is also responsible for possible failures in software applications. When software is used within medical devices, software failures may result in bodily injury or death. As a result, regulations have been imposed on the makers of medical devices to ensure their safety, including the safety of the devices’ software. The U.S. Food and Drug Administration requires the establishment of systems and control processes to ensure quality devices. A principal part of the quality assurance effort is testing. This paper explores the unique role of software testing in the design, development, and release of software used for medical devices and applications. It also provides practical, industry-driven guidance on medical device software testing techniques and strategies.
Advisors/Committee Members: Khurshid, Sarfraz (advisor), Perry, Dewayne E. (committee member).
Subjects/Keywords: Software; Software testing; FDA; Regulations; Medical devices; Software design lifecycle; Regulatory compliance

University of Texas – Austin
2. Kim, Jongwook.
Paan: a tool for back-propagating changes to projected documents.
Degree: MS in Computer Sciences, Computer Science, 2011, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2011-05-3534
Research in Software Product Line Engineering (SPLE) traditionally focuses on product derivation. Prior work has explored the automated derivation of products by module composition, but it has so far neglected propagating changes (edits) in a product back to the product line definition. It should be possible to update a derived, domain-specific product's features locally and later propagate those changes back to the product line definition automatically; otherwise, the entire product line has to be revised manually to make the changes permanent, which is a very error-prone process. To address these issues, we present Paan, a tool for creating product lines of MS Word documents with back-propagation support. It is a diff-based tool that ignores unchanged fragments and reveals fragments that are changed, added, or deleted. Paan takes a document with variation points (VPs) as input and shreds it into building blocks called tiles; a document is synthesized by retrieving the appropriate tiles and composing them. Only those tiles that are new or have changed must be updated in the tile repository. In this way, changes in composed documents can be back-propagated to their original feature module definitions.
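The tile mechanism is easy to picture in code. Below is a minimal Python sketch of diff-based back-propagation, assuming paragraph-level tiles and a repository keyed by content hash; Paan's actual tile granularity and repository format are not described in the record, so treat both as illustrative assumptions.

```python
import hashlib

def shred(document: str) -> list[str]:
    """Split a composed document into tiles (here: paragraphs)."""
    return [t for t in document.split("\n\n") if t.strip()]

def tile_id(tile: str) -> str:
    """Content hash used to detect unchanged tiles cheaply."""
    return hashlib.sha256(tile.encode()).hexdigest()

def back_propagate(edited: str, repository: dict[str, str]) -> list[str]:
    """Return the tiles that are new or changed; only these would need to
    be written back to the feature-module definitions."""
    changed = []
    for tile in shred(edited):
        tid = tile_id(tile)
        if tid not in repository:
            repository[tid] = tile   # new or edited tile
            changed.append(tile)
    return changed

repo: dict[str, str] = {}
back_propagate("Intro text.\n\nFeature A body.", repo)
# After a local edit, only the changed tile is pushed back:
print(back_propagate("Intro text.\n\nFeature A body, revised.", repo))
```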
Advisors/Committee Members: Batory, Don S., 1953- (advisor), Perry, Dewayne (committee member).
Subjects/Keywords: Software product line engineering; Software engineering; Document editing; MS Word; Word processing

University of Texas – Austin
3. -0930-2473.
Non-semantics-preserving transformations for higher-coverage test generation using symbolic execution.
Degree: MS in Engineering, Electrical and Computer Engineering, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/39064
Symbolic execution is a well-studied method that can produce high-quality test suites for programs. However, scaling it to real-world applications is a significant challenge, as it depends on the expensive process of solving constraints on program inputs. Our insight is that non-semantics-preserving program transformations can reduce the cost of symbolic execution, and that the tests generated for the transformed programs can still serve as quality suites for the original program. We present several such transformations that are designed to improve test input generation and/or speed up symbolic execution. We evaluated these optimizations using a suite of small examples and a substantial subset of Unix's Coreutils. In more than 50% of the cases, our approach reduces test generation time and increases code coverage.
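To make the idea concrete, here is a toy Python sketch of one plausible transformation, loop-bound reduction; the thesis itself works at the LLVM level with KLEE, so this illustrates only the workflow, not the implementation. The specific transformation shown and the hand-derived inputs standing in for a constraint solver are assumptions.

```python
# Toy non-semantics-preserving transformation: shrinking a loop bound
# shortens path conditions (roughly "2*x > 0" instead of "1000*x > 0"),
# so generating test inputs for the transformed program is cheaper.
def original(x: int) -> str:
    total = 0
    for _ in range(1000):          # long loop -> long, expensive paths
        total += x
    return "hit" if total > 0 else "miss"

def transformed(x: int) -> str:
    total = 0
    for _ in range(2):             # transformed: smaller bound, same branches
        total += x
    return "hit" if total > 0 else "miss"

def branches_covered(prog, inputs):
    """Which outcomes of the final branch do these inputs exercise?"""
    return {prog(x) for x in inputs}

# Inputs derived (here by hand) from the cheap transformed program still
# cover both outcomes of the original program's branch:
tests = [-1, 1]
print(branches_covered(transformed, tests))   # {'hit', 'miss'}
print(branches_covered(original, tests))      # {'hit', 'miss'}
```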
Advisors/Committee Members: Khurshid, Sarfraz (advisor), Perry, Dewayne (committee member).
Subjects/Keywords: Symbolic execution; Compiler optimizations; Non-semantics-preserving transformations; Testability transformations; Test generation; LLVM; KLEE

University of Texas – Austin
4. Ren, Xiaofei, M.S. in Engineering.
A case study of lean software practices in an IT application support department.
Degree: MS in Engineering, Software Engineering, 2011, University of Texas – Austin
URL: http://hdl.handle.net/2152/41440
The concept of lean manufacturing was formed at Toyota by Taiichi Ohno, who originated the "Just-in-Time" production system with the goals of delivering high value and cutting down waste. These concepts were partially adapted to software development in an Agile development context [1], where the goal is to deliver value to customers more quickly by eliminating waste and improving quality. However, we are not aware of any published attempt to adapt lean principles to IT maintenance work.
The purpose of the case study reported here is to demonstrate that the principles of lean software development can be effectively applied to a specific IT application support department. It is an empirical study of lean practices in the maintenance department of a large organization. Data collected from our release management tool before and after applying the lean principles to our IT group were compared. Our analysis shows that the lean principles improved the developers' focus on the given corrective or preventive task, and application quality also improved to a significant extent. More importantly, our customers did see more efficient support efforts that delivered good quality in a shorter time. All in all, the newly conceived support process, adapting lean principles to our situation, did in fact deliver more highly valued software to our customers more quickly while cutting down waste. On the other hand, we also learned that some challenges arose from conflicts between the new lean practices and our previous practices. The most significant of these conflicts showed up as developer workload imbalances and as customer confusion caused by having to communicate with different IT support teams for different types of maintenance requests. A future adjustment of how the lean principles are applied to IT maintenance may be necessary.
Advisors/Committee Members: Perry, Dewayne E. (advisor), Krasner, Herb (advisor).
Subjects/Keywords: Lean principles; Waste reduction; Customer satisfaction; Productivity; Software quality

University of Texas – Austin
5. Wang, Kaiyuan.
MuAlloy: an automated mutation system for Alloy.
Degree: MS in Engineering, Electrical and Computer Engineering, 2015, University of Texas – Austin
URL: http://hdl.handle.net/2152/31865
Mutation is a powerful technique that researchers have studied for several decades in the context of imperative code. For example, mutation testing is commonly considered a "gold standard" for test suite quality. Mutation in the context of declarative languages is a less studied problem. This thesis introduces a foundation for mutation-driven analyses for Alloy, a first-order, declarative language based on relations. Specifically, we introduce a family of mutation operators for Alloy models and define algorithms for applying the operators to different parts of the models. We embody these operators and algorithms in our prototype tool MuAlloy, which provides a GUI-based front-end for customizing the application of mutation operators. To demonstrate the potential of our approach, we illustrate the use of MuAlloy in two application scenarios: (1) mutation testing for Alloy (in the spirit of traditional mutation testing for imperative languages); and (2) program repair for Alloy using mutation.
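The flavor of such mutation operators can be sketched in a few lines. The Python sketch below applies two hypothetical operators textually to an Alloy-like fact; MuAlloy itself works on Alloy models rather than raw text, and the operator names here (QOR, ROR) are illustrative assumptions.

```python
import re

# Two illustrative mutation operators, applied textually for the sketch:
#   QOR: quantifier operator replacement (all -> some)
#   ROR: relational operator replacement (in -> not in)
def qor(model: str):
    for m in re.finditer(r"\ball\b", model):
        yield model[:m.start()] + "some" + model[m.end():]

def ror(model: str):
    for m in re.finditer(r"\bin\b", model):
        yield model[:m.start()] + "not in" + model[m.end():]

model = "fact Conn { all n: Node | n in Root.*next }"
for mutant in list(qor(model)) + list(ror(model)):
    print(mutant)
# Each mutant would then be checked with the Alloy Analyzer; mutants that
# no test distinguishes from the original model survive ("live" mutants).
```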
Advisors/Committee Members: Khurshid, Sarfraz (advisor), Perry, Dewayne E. (committee member).
Subjects/Keywords: MuAlloy; Mutation; Alloy; Mutation testing; Repair

University of Texas – Austin
6. Baum, Mark Vincent.
Refactoring for Software Transactional Memory.
Degree: MS in Engineering, Electrical and Computer Engineering, 2011, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2011-12-4741
Software transactional memory (STM) is an optimistic, lock-free concurrency mechanism that has the potential to positively transform how concurrent programming is performed. Despite its many desirable attributes, STM is not yet a ubiquitous programming language feature in the commercial software domain. There are many implementation challenges in retrofitting STM into pre-existing language frameworks. Furthermore, existing software systems will also need to be refactored in order to take advantage of STM's unique benefits. As with other time-consuming and error-prone refactoring processes, refactoring for STM is best done with automated tool support; it is the aim of this paper to propose such a tool.
Advisors/Committee Members: Kim, Miryung (advisor), Perry, Dewayne (committee member).
Subjects/Keywords: Refactoring; Automated tool supported refactoring; Software Transactional Memory; Refactoring for Software Transactional Memory

University of Texas – Austin
7. Singh, Punit.
Usability and productivity for silicon debug software: a case study.
Degree: MS in Engineering, Electrical and Computer Engineering, 2011, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2011-12-4470
Semiconductor manufacturing is complex. Companies strive to lead their markets by delivering chips on time that are bug (a.k.a. defect) free and have very low power consumption, while new research drives new features in chips. The case study reported here concerns the usability and productivity of silicon debug software tools: the set of software used to find bugs before delivering chips to the customer. The study's objective is to improve the usability and productivity of these tools by introducing metrics, with the measurement results driving a concrete plan of action. The GQM (Goal, Questions, Metrics) methodology was used to define the measurements and gather the data. The project was developed in two parts, or phases, and we took measurements over both phases of tool development; the findings from phase one improved tool usability in the second phase. The lesson learned is that tool usability is a complex measurement: improving usability means that users press the tool's help button less often, experience less downtime, and enter incorrect data less frequently. Even though this study focused on three important tools, the same usability metrics can be applied to the remaining five tools. For defining productivity metrics we also used the GQM methodology, and a productivity measurement using historical data was done to establish a baseline. The baseline measurements identified some existing bottlenecks in the overall silicon debug process. We link productivity to the time it takes a debug tool user to complete the assigned task(s). The total time taken across all the tools does not give us any actionable items for improving productivity; we would need to measure the time spent in each tool in the debug process, which is identified as future work. To improve usability, we recommend making the tools more robust in error handling and giving them good help features. To improve productivity, we recommend gathering data on where users spend most of their debug time; we can then focus on improving that time-consuming part of debug to make users more productive.
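A GQM metrics plan of this kind is straightforward to operationalize. The sketch below derives three usability metrics from hypothetical per-session tool logs; the field names and metric definitions are assumptions, not the thesis's actual measurement set.

```python
from dataclasses import dataclass

# GQM sketch. Goal: improve debug-tool usability. Question: how often do
# users need help, hit downtime, or enter invalid data? Metrics: below.
@dataclass
class Session:
    help_clicks: int
    inputs: int
    invalid_inputs: int
    downtime_min: float

def usability_metrics(sessions: list[Session]) -> dict[str, float]:
    n = len(sessions)
    return {
        "help_clicks_per_session": sum(s.help_clicks for s in sessions) / n,
        "invalid_input_rate": sum(s.invalid_inputs for s in sessions)
                              / max(1, sum(s.inputs for s in sessions)),
        "downtime_min_per_session": sum(s.downtime_min for s in sessions) / n,
    }

print(usability_metrics([Session(3, 40, 5, 2.5), Session(1, 55, 2, 0.0)]))
```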
Advisors/Committee Members: Krasner, Herb (advisor), Perry, Dewayne E. (advisor).
Subjects/Keywords: Silicon debug; Usability; Productivity; Software metrics

University of Texas – Austin
8. Qamar, Nabil.
WiFi-Med: implementation of a ubiquitous health monitoring system on an Android platform.
Degree: MS in Engineering, Electrical and Computer Engineering, 2011, University of Texas – Austin
URL: http://hdl.handle.net/2152/44633
Recent technological advances in biosensors, wireless networking, and mobile computing have enabled the design of systems that are capable of autonomously monitoring various vital signs and providing personalized feedback (e.g. alerts, alarms, and triggers) for the user in real time. As technology advances, there is no doubt that quality of life will improve for patients and the medical world alike. This thesis describes WiFi-Med, a client-side mobile application built on the Android platform. Our project is designed to enable a mobile device user to aggregate and monitor physiological data through wireless biosensors. Currently, our focus is to develop and improve the Android application using simulated physiological data; once perfected, the WiFi-Med application can be easily integrated with a body sensor network. First, we present the motivation behind WiFi-Med through real-life user scenarios, followed by an introduction to the Android platform architecture. Next, we describe the application's design and architecture, implementation model, and test strategy. Finally, we conclude with a discussion of future development ideas and present our thoughts on the prospects of combining WiFi-Med with biosensors in ubiquitous computing environments.
Advisors/Committee Members: Perry, Dewayne E. (advisor), Aziz, Adnan (committee member).
Subjects/Keywords: Android; Biosensors; Wi-Fi; ECG; Mobile Computing; Ubiquitous computing

University of Texas – Austin
9. Zheng, Xi, Ph. D.
Physically informed runtime verification for cyber physical systems.
Degree: PhD, Electrical and Computer Engineering, 2015, University of Texas – Austin
URL: http://hdl.handle.net/2152/31413
Cyber-physical systems (CPS) are integrations of computation with physical processes. CPS have gained popularity both in industry and in the research community and are represented by many varied mission-critical applications. Debugging CPS is important, but the intertwining of the cyber and physical worlds makes it very difficult. Formal methods, simulation, and testing are not sufficient to guarantee the required correctness; Runtime Verification (RV) provides a perfect complement. However, the state of the art in RV lacks either efficiency or expressiveness, and very few RV technologies are specifically designed for CPS. The CPS community requires an intuitive, expressive, and practical RV middleware toolset to improve the state of the art. In this work, I take an incremental and realistic approach to identifying and addressing the research challenges in CPS verification and validation. First, I carry out a systematic analysis of the state of the art and the state of the practice in verifying and validating CPS, using a structured online survey, semi-structured interviews, and an exhaustive literature review; from the findings, I identify the key research gaps and propose research directions to address them. Second, I pursue the most pertinent of these directions: providing practical, physically informed runtime verification toolsets specifically designed for CPS, as a sound foundation for the trial-and-error practice identified as the state of the art in verifying and validating CPS. I create an expressive yet intuitive language (BraceAssertion) to specify CPS properties. I develop a framework (BraceBind) to supplement CPS runtime verification with a real-time simulation environment that can integrate physical models from various simulation platforms. Based on BraceAssertion and BraceBind, which collectively capture the combination of logical content and physical environment, I develop a practical runtime verification framework (Brace) that is efficient, effective, and expressive in capturing both local and global properties, and that guarantees predictable runtime monitor behavior even under unpredictable surges of events. I evaluate the toolset with increasingly complex real CPS applications based on smart agent systems.
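As a flavor of what a runtime monitor checks, here is a minimal Python sketch of a bounded-response property over a timestamped event trace ("every cmd is acknowledged within 0.1 s"). This is not BraceAssertion syntax; the event names and bound are assumptions.

```python
def check_bounded_response(events, bound=0.1):
    """events: iterable of (timestamp_s, name) pairs. Returns the
    timestamps of `cmd` events not acknowledged within `bound`."""
    pending, violations = [], []      # timestamps of unacknowledged cmds
    for t, name in sorted(events):
        # any cmd older than `bound` with no ack yet is a violation
        violations += [p for p in pending if t - p > bound]
        pending = [p for p in pending if t - p <= bound]
        if name == "cmd":
            pending.append(t)
        elif name == "ack" and pending:
            pending.pop(0)            # an ack discharges the oldest cmd
    return violations

trace = [(0.00, "cmd"), (0.05, "ack"), (1.00, "cmd"), (1.30, "sensor")]
print(check_bounded_response(trace))  # [1.0]: the second cmd timed out
```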
Advisors/Committee Members: Julien, Christine, D. Sc. (advisor), Perry, Dewayne (committee member), Kim, Miryung (committee member), Longoria, Raul (committee member), Khurshid, Sarfraz (committee member).
Subjects/Keywords: Cyber physical systems; Formal verification; Runtime verification; Simulation models; Temporal logic; Timed automata; Behaviour driven development; Aspect oriented programming

University of Texas – Austin
10. -4094-6303.
Unifying program repair and program synthesis.
Degree: PhD, Electrical and Computer Engineering, 2018, University of Texas – Austin
URL: http://hdl.handle.net/2152/68477
The last few years have seen much progress in two related but traditionally disjoint areas of research: program repair and program synthesis. Program repair is the problem of locating and removing faults in a given faulty program. Program synthesis is the problem of generating programs automatically from high-level specifications. While innovation in each of these two research areas has been impressive, the techniques developed within one area have largely been confined to that area. Our insight is that the unification of program repair and program synthesis holds a key to developing well-founded, systematic, and scalable tools for repairing complex defects. The contribution of this dissertation is three-fold: a synthesis-based program repair approach, SketchRep, based on propositional satisfiability solving; an execution-driven synthesis engine for Java, EdSketch; and a program repair approach, SketchFix, that repairs defects at the AST node level with execution-driven sketching.
SketchRep is a debugging approach that reduces the problem of program repair to a sub-problem of program synthesis, namely program sketching, in which the user writes a sketch, i.e., an incomplete program that has holes, and automated tools complete the sketch with respect to a given specification or reference implementation. Our program repair approach translates the given faulty program into a sketch and leverages an off-the-shelf inductive synthesizer to fill in the holes of the incomplete program with respect to the given test suite.
EdSketch is an execution-driven synthesis engine for Java. Traditional solutions to the sketching problem perform a translation to SAT formulas. While effective for a range of programs, such translation-based approaches can become impractical for real applications because all relevant libraries must be translated to SAT. Instead of transforming the program into logic formulas for SAT solvers, EdSketch explores the actual program behaviors in the presence of libraries and provides a practical solution to sketching small parts of real-world applications.
SketchFix is a repair technique that generates candidate fixes on demand during test execution. It translates faulty programs into sketches, compiles each sketch once even though it may represent a large number of concrete candidates, and lazily initializes the candidates of the sketches while validating them against the test execution.
The dissertation describes each technique and presents experimental results that demonstrate its efficacy.
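The enumerate-and-validate core of sketch-based repair fits in a few lines. The Python sketch below replaces a suspicious comparison with a hole and validates candidate operators against a test suite; the real systems work on Java programs and use far smarter search, so the hole representation and candidate set here are assumptions.

```python
import operator

# A buggy max() is turned into a sketch: the comparison operator is a hole.
# Candidates are validated against the test suite during execution.
HOLE_CANDIDATES = {"<": operator.lt, "<=": operator.le,
                   ">": operator.gt, ">=": operator.ge}

def make_max(op):
    """Instantiate the sketch `max(a, b) = a if a HOLE b else b`."""
    return lambda a, b: a if op(a, b) else b

tests = [((2, 3), 3), ((5, 1), 5), ((4, 4), 4)]  # ((inputs), expected)

for name, op in HOLE_CANDIDATES.items():
    candidate = make_max(op)
    if all(candidate(*args) == want for args, want in tests):
        print("hole :=", name)   # '>' and '>=' both repair the sketch
```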
Advisors/Committee Members: Khurshid, Sarfraz (advisor), Garg, Vijay (committee member), Julien, Christine (committee member), Perry, Dewayne (committee member), Prasad, Mukul (committee member).
Subjects/Keywords: Program repair; Program synthesis; Execution-driven; On-demand candidate generation

University of Texas – Austin
11. Qiu, Rui, active 21st century.
Scaling and certifying symbolic execution.
Degree: PhD, Electrical and Computer Engineering, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/46495
Symbolic execution is a powerful, systematic program analysis approach that has received much visibility in the last decade. The key idea in symbolic execution is to explore all execution paths up to a bound on the path length, build path conditions that represent constraints on inputs that execute the corresponding paths, and solve the constraints using off-the-shelf constraint solvers to determine path feasibility (where possible). While systematic path exploration enables symbolic execution to find subtle bugs, scaling the approach remains a key challenge. Our thesis is that novel compositional, certifying, and distribution techniques can enhance the efficacy of symbolic execution. This dissertation designs, develops, and evaluates three techniques, based on the primitives of composition, certification, and distribution in program analysis, to enhance symbolic execution. Our composition technique CompoSE allows the overall symbolic execution results to be computed by composing intermediate results with respect to individual methods, rather than treating the entire program monolithically as is done traditionally. CompoSE first summarizes each method as a memoization tree that represents the key elements of symbolic execution of that method, and then uses these trees to efficiently replay the symbolic execution of the corresponding methods with respect to their calling contexts. The key novelty of CompoSE is that it allows composition in the presence of complex operations on the program heap. Our certification technique CertifiedSE allows symbolic execution analysis to be performed by one party, the producer, and utilized by another party, the consumer. The producer creates a certificate that can be checked efficiently by the consumer to validate the correctness of symbolic execution results. The key novelty of CertifiedSE is that it introduces the idea of certification in the context of symbolic execution, which enables a number of ways to enhance how symbolic execution is performed and used. Our distribution technique SynergiSE enhances symbolic execution in a novel two-fold integration approach. One, it integrates distributed analysis and constraint re-use to enhance symbolic execution using feasible ranges, which allows sharing of constraint solving results among different workers without communicating or sharing potentially large constraint databases (as required traditionally). Two, it integrates complementary techniques for test input generation, e.g., search-based generation and symbolic execution, for creating higher quality tests using unexplored ranges, which allows symbolic execution to re-use tests created by another technique for effective distribution of exploration of previously unexplored paths. The key novelty of SynergiSE is that it significantly reduces the amount of communication among different symbolic execution workers and enables an effective integration of heuristics-based and systematic approaches for test generation. We embody our techniques into prototypes based on the Symbolic…
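CompoSE's composition of per-method summaries can be caricatured with strings standing in for path conditions. In this Python sketch, a method is summarized once as (guard, result) pairs and replayed at call sites by conjoining the caller's path condition; memoization trees and heap effects, which are the actual contribution, are elided, and all names here are hypothetical.

```python
# Caricature of compositional symbolic execution: summarize a method once
# as (guard, symbolic-result) pairs, then replay the summary at each call
# site instead of re-exploring the method body.
SUMMARIES = {
    # summary of: def abs(x): return x if x >= 0 else -x
    "abs": [("x >= 0", "x"), ("x < 0", "-x")],
}

def replay(method: str, caller_pc: str, arg: str):
    """Instantiate the summary's parameter and conjoin the caller's
    path condition with each summarized path."""
    for guard, result in SUMMARIES[method]:
        yield (f"({caller_pc}) and ({guard.replace('x', arg)})",
               result.replace("x", arg))

# Two call sites reuse the same summary rather than re-executing `abs`:
for pc, res in replay("abs", "y > 10", "y"):
    print(pc, "->", res)
for pc, res in replay("abs", "z != 0", "z - 1"):
    print(pc, "->", res)
```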
Advisors/Committee Members: Khurshid, Sarfraz (advisor), Gligoric, Milos (committee member), Pasareanu, Corina S. (committee member), Perry, Dewayne E. (committee member), Yang, Guowei (committee member).
Subjects/Keywords: Symbolic execution; Composition; Certification; Distribution; Symbolic PathFinder

University of Texas – Austin
12. -6267-2468.
Navigating tradeoffs in context sharing among the Internet of Things.
Degree: PhD, Electrical and Computer Engineering, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/46513
This dissertation introduces new perspectives on sharing context (situational information) among Internet of Things (IoT) devices that have different processing power, storage capacity, communication bandwidth, and energy supply. Emerging IoT applications require devices to share information about their context with one another, often over device-to-device wireless links. However, as each IoT device has different capabilities, it may also have different priorities with respect to sharing its context with other nearby devices: low-end IoT devices with limited communication bandwidth and energy supply can prioritize a small context size (and therefore a reduced burden associated with sharing context information), while high-end IoT devices can prioritize communicating context without loss in data quality. Different IoT applications can also affect the priorities; real-time applications can prioritize fast data processing times, whereas big-data server applications can prioritize reduced context sizes because of the massive storage required. Prioritizing entails tradeoffs. For example, reducing context size through compression requires more energy consumption, and in the case of lossy compression, which gives even smaller output, data quality can be degraded. In this dissertation, we explore the tradeoffs in sharing context among IoT devices. Specifically, we present our solutions in three stages: theory, implementation, and execution models. In the theory stage, we present our context sharing model using four strategies. We start with strategies that prioritize a single factor, data quality or size; then we introduce a novel tunable strategy where users can control the tradeoff factors to meet their application's requirements. We build a mathematical model, and we analyze and experiment with the model to assess performance relative to tradeoff factors including size, data quality, and energy consumption. An aggregation strategy, which shows excellent performance in size reduction and energy consumption, is our fourth strategy. In the implementation stage, we introduce a programming model for IoT devices. We stress three principles: easy access to core functions, simple extension to meet application demands, and portability across multiple platforms. We demonstrate how these considerations drive the development of the programming model by providing programming tools that realize it; developers can use these tools to build context sharing activities into their applications. Ultimately, users' applications will be deployed on a variety of IoT devices. In the third stage of this research, execution models, we categorize IoT devices using three models: tiny devices, mobile devices, and server/cloudlet devices, depending on how the programming tools are employed. We present how context sharing IoT applications can be developed, deployed, and executed within each of these execution models. We expect that IoT developers can benefit in creating new context sharing applications from not…
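The tunable strategy's tradeoff can be sketched as a small cost model. In the Python sketch below, a device picks a zlib compression level by weighting payload size against an energy proxy; the weights, the energy proxy, and the candidate levels are assumptions for illustration, not the dissertation's model.

```python
import zlib

def choose_level(payload: bytes, w_size: float, w_energy: float) -> int:
    """Pick the compression level minimizing a weighted size/energy cost.
    Energy is proxied by the level itself (higher level -> more CPU work)."""
    def cost(level: int) -> float:
        size = len(zlib.compress(payload, level))
        return w_size * size + w_energy * level
    return min((0, 1, 6, 9), key=cost)

ctx = b'{"temp": 21.5, "loc": [30.28, -97.74]}' * 50
print(choose_level(ctx, w_size=1.0, w_energy=0.0))   # high-end: best ratio
print(choose_level(ctx, w_size=0.01, w_energy=5.0))  # low-end: cheap level
```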
Advisors/Committee Members: Julien, Christine, D. Sc. (advisor), Khurshid, Sarfraz (committee member), Perry, Dewayne E. (committee member), Tiwari, Mohit (committee member), Qiu, Lili (committee member).
Subjects/Keywords: Software engineering; Pervasive computing; Programming model; Internet of Things; M2M communication; D2D communication; Context sharing

University of Texas – Austin
13. DeAngelis, David.
Encouraging expert participation in online communities.
Degree: PhD, Electrical and Computer Engineering, 2011, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2011-08-4161
In concept, online communities allow people to access the wide range of knowledge and abilities of a heterogeneous group of users. In reality, current implementations of various online communities suffer from a lack of participation by the most qualified users. The participation of qualified users, or experts, is crucial to the social welfare and widespread adoption of such systems. This research proposes techniques for identifying the most valuable contributors to several classes of online communities, including question-and-answer (QA) forums and other content-oriented social networks. Once these target users are identified, content recommendation and novel quantitative incentives can be used to encourage their participation. This research represents an in-depth investigation of QA systems, while the major findings are widely applicable to online communities in general. An algorithm for recommending content in a QA forum is introduced that can route questions to the most appropriate responders. This increases the efficiency of the system and reduces the time investment of an expert responder by eliminating the need to search for potential questions to answer. The recommender is analyzed using real data captured from Yahoo! Answers. Additionally, an incentive mechanism for QA systems is developed based on a novel class of incentives: systemic rewards, or rewards that have tangible value within the framework of the online community. This research shows that human users have a strong preference for reciprocal systemic rewards over traditional rewards, and a simulation of a QA system based on an incentive that utilizes these reciprocal rewards outperforms a leading incentive mechanism in terms of expert participation. An architecture is developed for a QA system built upon content recommendation and this novel incentive mechanism. This research shows that it is possible to identify the most valuable contributors to an online community and to motivate their participation through a novel incentive mechanism based on meaningful rewards.
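Question routing of this kind is easy to prototype. The Python sketch below scores responders by keyword overlap with the question, weighted by past answer quality; the scoring function and profile fields are assumptions, not the dissertation's recommender.

```python
# Toy content-based question router: score each expert by topic overlap
# with the question, weighted by historical answer quality.
def route(question_terms: set, experts: dict) -> str:
    def score(name: str) -> float:
        profile = experts[name]
        overlap = len(question_terms & profile["topics"])
        return overlap * profile["quality"]
    return max(experts, key=score)

experts = {
    "alice": {"topics": {"android", "wifi"}, "quality": 0.9},
    "bob":   {"topics": {"databases"},       "quality": 0.8},
}
print(route({"android", "battery"}, experts))   # alice
```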
Advisors/Committee Members: Barber, K. Suzanne (advisor), Perry, Dewayne E. (committee member), Arapostathis, Aristotle (committee member), Julien, Christine (committee member), Francisco-Revilla, Luis (committee member).
Subjects/Keywords: Social networks; Trust in multi-agent systems; User modeling; Question and answer systems; Online communities

University of Texas – Austin
14. Zhang, Yuqun.
Modeling and predicting data for business intelligence.
Degree: PhD, Electrical and Computer Engineering, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/46572
Business intelligence is an area in which data and actionable information are analyzed and provided to support more informed business actions; in general, any technique that contributes to better business decisions can be categorized as a business intelligence technique. In particular, business process management is a subarea that focuses on improving business performance by managing and optimizing business processes, and management data mining is a subarea that applies data mining techniques to achieve better business performance. In business process management, a business process is a collection of relevant, structured activities or tasks that produce specific services or products for certain business goals. Business process modeling refers to the activities of representing intra- or inter-organization processes so that the current processes can be analyzed or improved. While abundant business process modeling techniques and their associated analyses have been proposed to capture different aspects of business processes, modern business processes can be so complicated that many properties, such as performance optimization and evaluation, still cannot be accurately described and understood. Management data mining refers to applying data mining techniques in multiple domains of business management, e.g., supply chain management and marketing analysis; typical research topics include building models to provide feedback for adjusting supply chain policies or marketing strategies. Traditional research tends to build generic models for specific scenarios, which arguably leads to inaccuracies under more granular disturbances. My thesis focuses on approaches to the challenges in business process optimization and evaluation and the associated data analysis. Specifically, I propose a data-centric technique for modeling composite business activities that includes components of data, human actors, and atomic activities, and I explore this technique in two major dimensions. First, by applying the technique to workflow-based business processes, I explore the possibility of reconstructing these processes by modifying the execution order of business activities, and I develop efficient algorithms that approach optimal temporal performance for data-centric business processes. Second, I build a symbolic process generator to stochastically generate symbolic data-centric business processes that can be used to analyze their properties and to evaluate optimization approaches according to end users' specifications. Moreover, I zoom in on a granular data type from inventory management processes and build data mining models to forecast it. The major contributions of my thesis include: 1) proposing a data-centric business process modeling technique that emphasizes business artifacts, compared with traditional workflow-based modeling techniques; 2) developing approaches to optimize the temporal performance of data-centric business processes; and 3) applying my symbolic process generator so that data-centric business processes can be simulated…
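Reordering activities subject to data dependencies is essentially scheduling over a dependency graph. The Python sketch below derives activity dependencies from assumed read/write sets and emits maximal parallel steps; the activity names and representation are hypothetical, not the dissertation's model.

```python
from graphlib import TopologicalSorter

# Each activity lists the data artifacts it reads and writes; activities
# may be reordered or parallelized as long as reads follow their writers.
activities = {
    "collect":  {"reads": set(),            "writes": {"orders"}},
    "validate": {"reads": {"orders"},       "writes": {"valid_orders"}},
    "price":    {"reads": {"valid_orders"}, "writes": {"quotes"}},
    "audit":    {"reads": {"orders"},       "writes": {"audit_log"}},
}

producers = {d: a for a, io in activities.items() for d in io["writes"]}
deps = {a: {producers[d] for d in io["reads"]} for a, io in activities.items()}

ts = TopologicalSorter(deps)
ts.prepare()
step = 0
while ts.is_active():
    ready = list(ts.get_ready())   # activities in one step can run in parallel
    print(f"step {step}: {ready}")
    ts.done(*ready)
    step += 1
# step 0: collect; step 1: validate and audit (parallel); step 2: price
```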
Advisors/Committee Members: Perry, Dewayne E. (advisor), Barber, Suzanne (committee member), Khurshid, Sarfraz (committee member), Gligoric, Milos (committee member), Yang, Guowei (committee member).
Subjects/Keywords: Business intelligence; Data modeling; Data prediction

University of Texas – Austin
15. Gopinath, Divya.
Systematic techniques for more effective fault localization and program repair.
Degree: PhD, Electrical and Computer Engineering, 2015, University of Texas – Austin
URL: http://hdl.handle.net/2152/33386
Debugging faulty code is a tedious process that is often quite expensive and can require much manual effort. Developers typically perform debugging in two key steps: (1) fault localization, i.e., identifying the location of the faulty line(s) of code; and (2) program repair, i.e., modifying the code to remove the fault(s). Automating debugging to reduce its cost has been the focus of a number of research projects during the last decade, which have introduced a variety of techniques.
However, existing techniques suffer from two basic limitations. One, they lack the accuracy to handle real programs. Two, they focus on automating only one of the two key steps, thereby leaving the other key step to the developer.
Our thesis is that an approach integrating systematic search based on state-of-the-art constraint solvers with techniques for analyzing artifacts that describe application-specific properties and behaviors provides the basis for developing more effective debugging techniques. We focus on faults in programs that operate on structurally complex inputs, such as heap-allocated data or relational databases.
Our approach lays the foundation for a unified framework for the localization and repair of faults in programs. We embody our thesis in a suite of integrated techniques based on propositional satisfiability solving, correctness specification analysis, test-spectra analysis, and rule-learning algorithms from machine learning; we implement them as a prototype tool-set and evaluate them using several subject programs.
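One of the named ingredients, test-spectra analysis, has a compact classic form. The Python sketch below ranks lines by the Ochiai suspiciousness formula computed from per-test coverage and pass/fail outcomes; this is the standard textbook scoring, shown for illustration, not the dissertation's integrated technique.

```python
from math import sqrt

def ochiai(coverage: dict, outcomes: dict):
    """coverage[test] = set of executed lines; outcomes[test] = True if
    the test passed. Lines covered mostly by failing tests rank high."""
    total_fail = sum(not ok for ok in outcomes.values())
    scores = {}
    for line in set().union(*coverage.values()):
        fail_cov = sum(line in coverage[t] and not outcomes[t] for t in coverage)
        pass_cov = sum(line in coverage[t] and outcomes[t] for t in coverage)
        denom = sqrt(total_fail * (fail_cov + pass_cov))
        scores[line] = fail_cov / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

cov = {"t1": {1, 2, 3}, "t2": {1, 3}, "t3": {1, 2}}
ok  = {"t1": False, "t2": True, "t3": True}
print(ochiai(cov, ok))  # lines 2 and 3 outrank line 1 (covered by all tests)
```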
Advisors/Committee Members: Khurshid, Sarfraz (advisor), Perry, Dewayne (committee member), Pingali, Keshav (committee member), Julien, Christine (committee member), Bias, Randolph (committee member).
Subjects/Keywords: Software debugging; Program repair; Fault localization; SAT; Correctness specifications; Classifier learning
APA (6th Edition):
Gopinath, D. (2015). Systematic techniques for more effective fault localization and program repair. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/33386
Chicago Manual of Style (16th Edition):
Gopinath, Divya. “Systematic techniques for more effective fault localization and program repair.” 2015. Doctoral Dissertation, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/33386.
MLA Handbook (7th Edition):
Gopinath, Divya. “Systematic techniques for more effective fault localization and program repair.” 2015. Web. 05 Mar 2021.
Vancouver:
Gopinath D. Systematic techniques for more effective fault localization and program repair. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2015. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/33386.
Council of Science Editors:
Gopinath D. Systematic techniques for more effective fault localization and program repair. [Doctoral Dissertation]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/33386
16.
Sullivan, Allison.
Automated testing and sketching of alloy models.
Degree: PhD, Electrical and Computer Engineering, 2017, University of Texas – Austin
URL: http://hdl.handle.net/2152/47299
Models of software systems, e.g., designs, play an important role in the development of reliable and dependable systems. However, writing correct designs is hard. What makes formulating desired design properties particularly hard is the common lack of intuitive and effective techniques for validating their correctness. Despite significant advances in developing notations and languages for writing designs, techniques for validating them are often not as advanced and pose an undue burden on the users. Our thesis is that some foundational and widely used techniques in software testing – the most common methodology for validating the quality of code – can provide a basis for a familiar and effective new approach to checking the correctness of designs. We lay a new foundation for testing designs in the traditional spirit of testing code. Our specific focus is the Alloy language, which is particularly useful for building models of software systems. Alloy is a first-order language based on relations and is supported by a SAT-based analysis tool, which provides a natural backend for building new analyses for Alloy. In recent work, we defined a foundation for testing Alloy models in the spirit of the widely practiced unit testing methodology popularized by the xUnit family of frameworks, such as JUnit for Java. Specifically, AUnit, our testing framework for Alloy, defines test cases, test outcomes (pass/fail), and model coverage, and forms a foundation that enables the development of new testing techniques for Alloy. To provide a more robust validation environment for Alloy, we build on the AUnit foundation in four primary ways. One, we introduce test generation algorithms, which automate the creation of test inputs, traditionally one of the most costly steps in testing. Two, we introduce synthesis of parts of Alloy models using sketching by enumeration and constraint checking: the user writes a partial Alloy model and outlines the expected behavior using tests, and our sketching framework completes the model. Three, we investigate optimizations that improve the efficacy and scalability of our core model sketching techniques. Four, we introduce a second approach to sketching Alloy models that incorporates attributes of our test generation efforts as well as our initial sketching framework, and uses equivalent formulas rather than tests to outline expected behavior. To evaluate our techniques, we use a suite of well-studied Alloy models, including some from Alloy's standard distribution, as well as models written by graduate students as part of their homework.
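To make the analogy with unit testing concrete, the following plain-Java caricature shows AUnit's central idea: a test pairs a concrete valuation of a model's relations with an expected outcome for a predicate. Real AUnit tests are written against Alloy models and checked by Alloy's SAT backend; the acyclicity predicate and every name below are our own illustration.

```java
import java.util.*;

/** Caricature of an AUnit-style test: a concrete valuation plus an expected
 *  pass/fail outcome for a model predicate. Real AUnit operates on Alloy. */
public class DeclarativeTest {
    /** Toy "model" predicate: the edge relation over nodes 0..n-1 is acyclic. */
    static boolean acyclic(int n, List<int[]> edges) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) adj.get(e[0]).add(e[1]);
        int[] state = new int[n];                // 0 = unseen, 1 = on stack, 2 = done
        for (int v = 0; v < n; v++)
            if (state[v] == 0 && onCycle(v, adj, state)) return false;
        return true;
    }
    static boolean onCycle(int v, List<List<Integer>> adj, int[] state) {
        state[v] = 1;
        for (int w : adj.get(v)) {
            if (state[w] == 1) return true;                        // back edge: cycle
            if (state[w] == 0 && onCycle(w, adj, state)) return true;
        }
        state[v] = 2;
        return false;
    }
    public static void main(String[] args) {
        // Test = valuation {0->1, 1->2} + expected outcome "predicate holds".
        List<int[]> edges = List.of(new int[]{0, 1}, new int[]{1, 2});
        System.out.println(acyclic(3, edges) ? "PASS" : "FAIL");
    }
}
```

Test generation, in this caricature, would enumerate small valuations automatically instead of writing them by hand, which is the step the dissertation's algorithms automate for Alloy.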
Advisors/Committee Members: Khurshid, Sarfraz (advisor), Gligoric, Milos (committee member), Julien, Christine (committee member), Leonard, Elizabeth (committee member), Perry, Dewayne E. (committee member).
Subjects/Keywords: Alloy; AUnit; Testing; Specifications; Program synthesis; Program sketching; Automated test generation; Coverage; Test framework; Unit tests; Declarative model; SAT
APA (6th Edition):
Sullivan, A. (2017). Automated testing and sketching of alloy models. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/47299
Chicago Manual of Style (16th Edition):
Sullivan, Allison. “Automated testing and sketching of alloy models.” 2017. Doctoral Dissertation, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/47299.
MLA Handbook (7th Edition):
Sullivan, Allison. “Automated testing and sketching of alloy models.” 2017. Web. 05 Mar 2021.
Vancouver:
Sullivan A. Automated testing and sketching of alloy models. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2017. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/47299.
Council of Science Editors:
Sullivan A. Automated testing and sketching of alloy models. [Doctoral Dissertation]. University of Texas – Austin; 2017. Available from: http://hdl.handle.net/2152/47299
17.
Kim, Jongwook.
Reflective and relativistic refactoring with feature-awareness.
Degree: PhD, Computer Science, 2017, University of Texas – Austin
URL: http://hdl.handle.net/2152/47286
Refactoring is a core technology in modern software development. It is central to popular software design movements, such as Extreme Programming [23] and Agile software development [91], and all major Integrated Development Environments (IDEs) today offer some form of refactoring support. Despite this, refactoring engines have languished behind research. Modern IDEs offer no means to sequence refactorings to automate program changes. Further, current refactoring engines exhibit problems of speed and expressivity, which make writing composite refactorings such as design patterns infeasible. Even worse, existing refactoring tools for object-oriented languages are unaware of configurations in Software Product Line (SPL) codebases. With this motivation in mind, this dissertation makes three contributions to address these issues. First, we present a Java API library, called R2, for scripting Eclipse refactorings to retrofit design patterns into existing programs. We encoded 18 of the 23 design patterns described by the Gang of Four [57] as R2 scripts and explain why the remaining patterns are inappropriate for refactoring engines. R2 sheds light on why refactoring speed and expressiveness are critical issues for scripting. Second, we present a new Java refactoring engine, called R3, that addresses an Achilles heel in contemporary refactoring technology, namely scripting performance. Unlike classical refactoring techniques that modify Abstract Syntax Trees (ASTs), R3 refactors programs by rendering ASTs via pretty printing. AST rendering never changes the AST; it only displays different views of the AST/program. Coupled with new ways to evaluate refactoring preconditions, R3 increases refactoring speed by an order of magnitude over Eclipse and facilitates computing views of a program in which the original behavior is preserved. Third, we provide a feature-aware refactoring tool, called X15, for SPL codebases written in Java. X15 takes advantage of R3's view rendering to implement a projection technology from Feature-Oriented Software Development, which produces subprograms of the original SPL by hiding unneeded feature code. X15 is the first feature-aware refactoring tool for Java that implements a theory of refactoring feature modules, and it allows users to edit and refactor SPL programs via “views”. In the most demanding experiments, X15 runs barely a second slower than R3, giving evidence that refactoring engines for SPL codebases can indeed be efficient.
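The scripting idea can be pictured as composing refactorings into a pipeline. The sketch below uses naive string rewriting purely to show the shape of a script; R2's actual API drives Eclipse's AST-based refactorings and checks their preconditions, none of which is reproduced here, and both "refactorings" below are hypothetical stand-ins.

```java
import java.util.function.Function;

/** Hypothetical sketch of sequencing refactorings as composable source
 *  transformations. The string rewrites below are naive stand-ins for
 *  real, precondition-checked AST refactorings. */
public class RefactoringScript {
    static Function<String, String> renameMethod(String from, String to) {
        return src -> src.replace(from + "(", to + "(");   // naive: ignores scoping
    }
    static Function<String, String> addMarkerInterface(String cls, String iface) {
        return src -> src.replace("class " + cls, "class " + cls + " implements " + iface);
    }
    public static void main(String[] args) {
        // A "script" is a left-to-right composition of refactoring steps.
        Function<String, String> script =
            renameMethod("process", "accept")
                .andThen(addMarkerInterface("Node", "Visitable"));
        System.out.println(script.apply("class Node { void process() {} }"));
        // -> class Node implements Visitable { void accept() {} }
    }
}
```

Treating each step as an ordinary function is what makes sequencing, and hence multi-step design-pattern retrofits, expressible; the engineering challenge R3 addresses is making each step fast enough for long scripts.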
Advisors/Committee Members: Batory, Don S., 1953- (advisor), Dig, Danny (committee member), Perry, Dewayne (committee member), Fussell, Donald (committee member), Gligoric, Milos (committee member).
Subjects/Keywords: Refactoring; Design pattern; Software product line; Software development; Feature-awareness; Refactoring speed; Refactoring expressiveness; Refactoring engines; Scripting performance; Feature-aware refactoring
APA (6th Edition):
Kim, J. (2017). Reflective and relativistic refactoring with feature-awareness. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/47286
Chicago Manual of Style (16th Edition):
Kim, Jongwook. “Reflective and relativistic refactoring with feature-awareness.” 2017. Doctoral Dissertation, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/47286.
MLA Handbook (7th Edition):
Kim, Jongwook. “Reflective and relativistic refactoring with feature-awareness.” 2017. Web. 05 Mar 2021.
Vancouver:
Kim J. Reflective and relativistic refactoring with feature-awareness. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2017. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/47286.
Council of Science Editors:
Kim J. Reflective and relativistic refactoring with feature-awareness. [Doctoral Dissertation]. University of Texas – Austin; 2017. Available from: http://hdl.handle.net/2152/47286
18.
Hung, Wei-Lun.
Asynchronous automatic-signal monitors with multi-object synchronization.
Degree: PhD, Electrical and Computer engineering, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/46269
One of the fundamental problems in parallel programming is that there is no simple programming paradigm that provides both mutual exclusion and synchronization with an efficient implementation. In monitor-based [Hoa74, Han75] (lock-based) systems, only experienced programmers can develop high-performance fine-grained lock-based implementations, and programmers frequently introduce bugs with traditional monitors. Researchers have proposed transactional memory [HM93, ST95], which provides a simple and elegant mechanism for programmers to atomically execute a set of memory operations, so that there is no deadlock in transactional memory systems. However, most transactional memory systems lack conditional synchronization support [WLS14, LW14]; hence, writing multi-threaded programs with conditional synchronization is rather difficult. In this dissertation, we develop a parallel programming framework that provides simple constructs for mutual exclusion and synchronization as well as an efficient implementation.
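The constructs in question are in the spirit of automatic-signal monitors, in which a thread waits on a predicate and no explicit signalling appears at call sites. A minimal Java emulation of that programming model is sketched below; the class and method names are ours, and the dissertation's framework adds asynchronous signalling, multi-object synchronization, and an efficient implementation that this sketch does not attempt.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.BooleanSupplier;

/** Minimal automatic-signal monitor: callers state WHAT they wait for,
 *  never WHEN to signal. Every state change wakes all waiters, which
 *  re-check their predicates. Illustrative emulation only. */
public class AutoMonitor {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition changed = lock.newCondition();

    /** Block until pred holds, then run update atomically. */
    public void runWhen(BooleanSupplier pred, Runnable update) throws InterruptedException {
        lock.lock();
        try {
            while (!pred.getAsBoolean()) changed.await();  // predicate wait, no named condition
            update.run();
            changed.signalAll();                           // implicit signal on every change
        } finally {
            lock.unlock();
        }
    }
}
```

A one-slot buffer then needs no signalling logic at all: a producer calls runWhen(() -> slot == null, () -> slot = next) and a consumer runWhen(() -> slot != null, () -> consume()). The price is wasteful wake-ups and predicate re-evaluation, which is exactly the efficiency problem such research frameworks target.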
Advisors/Committee Members: Garg, Vijay K. (Vijay Kumar), 1963- (advisor), Julien, Christine (committee member), Khurshid, Sarfraz (committee member), Mittal, Neeraj (committee member), Perry, Dewayne E. (committee member), Pingali, Keshav (committee member).
Subjects/Keywords: Automatic signal; Implicit signal; Monitor; Synchronization; Concurrency; Parallel programming
APA (6th Edition):
Hung, W. (2016). Asynchronous automatic-signal monitors with multi-object synchronization. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/46269
Chicago Manual of Style (16th Edition):
Hung, Wei-Lun. “Asynchronous automatic-signal monitors with multi-object synchronization.” 2016. Doctoral Dissertation, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/46269.
MLA Handbook (7th Edition):
Hung, Wei-Lun. “Asynchronous automatic-signal monitors with multi-object synchronization.” 2016. Web. 05 Mar 2021.
Vancouver:
Hung W. Asynchronous automatic-signal monitors with multi-object synchronization. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2016. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/46269.
Council of Science Editors:
Hung W. Asynchronous automatic-signal monitors with multi-object synchronization. [Doctoral Dissertation]. University of Texas – Austin; 2016. Available from: http://hdl.handle.net/2152/46269

University of Texas – Austin
19.
-8333-9656.
Effective bug detection and localization using information retrieval.
Degree: PhD, Electrical and Computer engineering, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/40245
Software bugs pose a fundamental threat to the reliability of software systems, even in systems designed by the best software engineering (SE) teams using the best SE practices. Detecting bugs early and fixing them quickly are extremely important, but they are also very expensive and challenging, especially at scale. While the sciences of bug detection (e.g., software testing) and localization via static and dynamic program analyses have been explored considerably, text-based Information Retrieval (IR) techniques are interesting and promising new approaches to these problems. One advantage of text-based approaches is that they can utilize a wealth of (implicit) semantic information about a program’s functionality from the program text, which is almost impossible to extract using program-analysis-based techniques. This dissertation builds a deeper understanding of current bug triaging and fixing processes via mining software repositories, and introduces new techniques for effective bug detection and localization. The dissertation has three main parts. First, we perform a number of empirical studies to investigate the extent of and reasons for long-lived bugs, their severities, and the time spent in different phases of the bug-fixing process. We demonstrate that many bugs remain unfixed for an inordinate period of time due to numerous reasons, including difficulties in detecting, localizing, and fixing them. Second, we demonstrate that developers use very similar program text in source code and its corresponding test cases, which can be exploited for powerful test prioritization. We introduce a novel IR-based regression test prioritization technique called REPiR that embodies this insight, and show that REPiR is more efficient than program-analysis-based or dynamic-coverage-based techniques. Third, we demonstrate that fine-grained program text such as class names, method names, variable names, and comments carries different levels of information, which can be utilized to improve IR-based bug localization. We introduce a structured retrieval technique called BLUiR that embodies these insights and show that BLUiR outperforms the existing state-of-the-art IR-based bug localization approaches. Finally, we further improve BLUiR using natural language processing. We make four contributions in this dissertation. One, we provide empirical evidence that a considerable number of non-trivial bugs in software projects survive for a long time, and we describe in great detail the reasons for delays in fixing them, the nature of the fixes, and the overall fixing process of these long-lived bugs. Two, we introduce the notion of IR-based regression test prioritization based on program changes. Three, we introduce the notion of structured retrieval for bug localization. Four, we provide an in-depth analysis of the extent to which natural language processing can further improve IR-based bug localization. The central ideas are embodied in a suite of prototype tools. Rigorous…
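As a point of reference for the IR techniques discussed, the sketch below ranks source files against a bug report using plain tf-idf scoring. It shows only the unstructured baseline; BLUiR's structured retrieval scores fields such as class names and method names separately, and the tokenization, names, and scoring details here are our simplification.

```java
import java.util.*;

/** Baseline IR bug localization: rank source files by tf-idf overlap with
 *  bug-report terms. BLUiR refines this with structured (per-field)
 *  retrieval, which this sketch omits. */
public class IrLocalizer {
    public static List<String> rank(Map<String, List<String>> fileTerms,
                                    List<String> bugReportTerms) {
        int n = fileTerms.size();
        Map<String, Integer> df = new HashMap<>();          // document frequency
        for (List<String> terms : fileTerms.values())
            for (String t : new HashSet<>(terms)) df.merge(t, 1, Integer::sum);
        Map<String, Double> score = new HashMap<>();
        for (Map.Entry<String, List<String>> e : fileTerms.entrySet()) {
            double s = 0;
            for (String q : bugReportTerms) {
                long tf = e.getValue().stream().filter(q::equals).count();
                if (tf > 0) s += tf * Math.log((double) n / df.get(q));
            }
            score.put(e.getKey(), s);
        }
        List<String> ranked = new ArrayList<>(fileTerms.keySet());
        ranked.sort((a, b) -> Double.compare(score.get(b), score.get(a)));
        return ranked;                                      // most suspicious file first
    }
}
```

Splitting each file into separate class-name, method-name, and comment "documents" and summing the per-field scores is, roughly, the structured-retrieval move the dissertation evaluates.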
Advisors/Committee Members: Perry, Dewayne E. (advisor), Khurshid, Sarfraz (committee member), Julien, Christine (committee member), Gligoric, Milos (committee member), Devanbu, Premkumar (committee member), Lawall, Julia (committee member).
Subjects/Keywords: Software testing; Automatic bug localization
APA (6th Edition):
-8333-9656. (2016). Effective bug detection and localization using information retrieval. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/40245
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-8333-9656. “Effective bug detection and localization using information retrieval.” 2016. Doctoral Dissertation, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/40245.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-8333-9656. “Effective bug detection and localization using information retrieval.” 2016. Web. 05 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-8333-9656. Effective bug detection and localization using information retrieval. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2016. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/40245.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-8333-9656. Effective bug detection and localization using information retrieval. [Doctoral Dissertation]. University of Texas – Austin; 2016. Available from: http://hdl.handle.net/2152/40245
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
20.
Che, Meiru.
Managing architectural design decision documentation and evolution.
Degree: PhD, Electrical and Computer Engineering, 2014, University of Texas – Austin
URL: http://hdl.handle.net/2152/28389
Software architecture provides a high-level framework for a software system and plays an important role in achieving functional and non-functional requirements. Since 2004, software architecture has been considered a set of architectural design decisions (ADDs). However, software architecture is implicit and evolves as the software development process moves forward. This implicitness, together with continuous evolution, leads to many problems, such as architecture drift and erosion as well as high-cost reconstruction. Without capturing and managing ADDs, most existing architectural knowledge evaporates, and reusing and evolving an architecture can be difficult. These problems are even more serious in global software development (GSD). This dissertation presents a novel methodology for capturing ADDs during the architecting process and managing their evolution to reduce architectural knowledge evaporation. The methodology explicitly documents ADDs using a scenario-based approach that covers three views of a software architecture to record architectural knowledge, and incorporates evolution-centered characteristics to manage ADD evolution. Furthermore, the dissertation examines ADD management in the context of GSD, analyzing typical ADD management paradigms and offering insights into, techniques for, and support for sharing and coordinating ADDs in a GSD setting. The dissertation addresses both the documentation and the evolution needs for ADDs in localized and global software development.
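To give a feel for what explicitly documenting an ADD can involve, here is one hypothetical shape for a captured decision; the field names are our illustration, not the dissertation's actual scenario-based schema or its three views.

```java
import java.util.List;

/** Hypothetical shape of one documented architectural design decision (ADD).
 *  Field names are illustrative, not the dissertation's schema. */
public record DesignDecision(
        String id,
        String decision,          // what was decided
        String rationale,         // why it was decided
        List<String> scenarios,   // scenarios that motivated or validate it
        List<String> supersedes   // earlier decisions this one replaces (evolution)
) {}
```

Even this toy shape shows why evolution support matters: without the supersedes links, a revised decision silently orphans the knowledge recorded in the decisions it replaced.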
Advisors/Committee Members: Perry, Dewayne E. (advisor).
Subjects/Keywords: Software architecture; Architectural design decision; Architecture documentation; Architecture evolution
APA (6th Edition):
Che, M. (2014). Managing architectural design decision documentation and evolution. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/28389
Chicago Manual of Style (16th Edition):
Che, Meiru. “Managing architectural design decision documentation and evolution.” 2014. Doctoral Dissertation, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/28389.
MLA Handbook (7th Edition):
Che, Meiru. “Managing architectural design decision documentation and evolution.” 2014. Web. 05 Mar 2021.
Vancouver:
Che M. Managing architectural design decision documentation and evolution. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2014. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/28389.
Council of Science Editors:
Che M. Managing architectural design decision documentation and evolution. [Doctoral Dissertation]. University of Texas – Austin; 2014. Available from: http://hdl.handle.net/2152/28389
21.
Gopinath, Divya.
Scaling scope bounded checking using incremental approaches.
Degree: MSin Engineering, Electrical and Computer Engineering, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-05-1209
Bounded verification is an effective technique for finding subtle bugs in object-oriented programs. Given a program, its correctness specification, and bounds on the input domain size, scope-bounded checking translates bounded code segments into formulas in Boolean logic and uses off-the-shelf satisfiability solvers to search for correctness violations. However, scalability is a key issue for the technique, since for non-trivial programs the formulas are often complex and can choke the solvers. This thesis describes approaches that aim to scale scope-bounded checking by utilizing syntactic and semantic information from the code to split a program into sub-programs that can be checked incrementally. It presents a thorough evaluation of the approaches and compares their performance with existing bounded verification techniques. Novel ideas for future work, specifically a specification-slicing-driven splitting approach, are proposed to further improve the scalability of bounded verification.
Advisors/Committee Members: Khurshid, Sarfraz (advisor), Perry, Dewayne (committee member).
Subjects/Keywords: Bounded verification; Incremental approach; Splitting strategies; Data flow analysis; Forge
APA (6th Edition):
Gopinath, D. (2010). Scaling scope bounded checking using incremental approaches. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-05-1209
Chicago Manual of Style (16th Edition):
Gopinath, Divya. “Scaling scope bounded checking using incremental approaches.” 2010. Masters Thesis, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/ETD-UT-2010-05-1209.
MLA Handbook (7th Edition):
Gopinath, Divya. “Scaling scope bounded checking using incremental approaches.” 2010. Web. 05 Mar 2021.
Vancouver:
Gopinath D. Scaling scope bounded checking using incremental approaches. [Internet] [Masters thesis]. University of Texas – Austin; 2010. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/ETD-UT-2010-05-1209.
Council of Science Editors:
Gopinath D. Scaling scope bounded checking using incremental approaches. [Masters Thesis]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-05-1209
22.
Vasquez, Roberto Mario.
A project plan for improving the performance measurement process : a usability case study.
Degree: MSin Engineering, Electrical and Computer Engineering, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-12-2186
Many good software practices are often discarded because of the syndrome “there is not enough time, do it later” or “it is in our heads and there is no time to write it down.” As a consequence, projects are late, time frames to complete software modules are unrealistic and miscalculated, and traceability to required documents and their respective stakeholders does not exist. It is not until the release of the application that it becomes apparent the functionality does not meet the expectations of the end users and stakeholders. The effect can be detrimental to the individuals of the development team and to the organization. Associating measurements and metrics with internal software processes and tasks, followed by analysis and continual evaluation, are key elements for closing many of the recurring gaps in the software engineering life cycle, regardless of the software methodology.
This report presents a usability case study of a customized application during its development. The application contains internal indicator modules for performance measurement processes captured at the level of a Request System application within a horizontal organizational group. The main goals of the usability surveys and case study were (1) to identify, define, and evaluate the current gaps in the system and (2) to find new approaches and strategies to move the project in the right direction.
Gaps identified throughout the development process are included as indicators for process improvement. The results of the usability case study set new goals and give clear direction to the project. Goal-driven measurements and the creation of a new centralized collaborative web system for communicating with other teams are parts of the solution. The processes and techniques may benefit companies interested in applying similar tactics to improve their own software project processes.
Advisors/Committee Members: Krasner, Herb (advisor), Perry, Dewayne E. (advisor).
Subjects/Keywords: Usability case study; GQM; Measurement metrics; Software quality; Software gaps
APA (6th Edition):
Vasquez, R. M. (2010). A project plan for improving the performance measurement process : a usability case study. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-12-2186
Chicago Manual of Style (16th Edition):
Vasquez, Roberto Mario. “A project plan for improving the performance measurement process : a usability case study.” 2010. Masters Thesis, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/ETD-UT-2010-12-2186.
MLA Handbook (7th Edition):
Vasquez, Roberto Mario. “A project plan for improving the performance measurement process : a usability case study.” 2010. Web. 05 Mar 2021.
Vancouver:
Vasquez RM. A project plan for improving the performance measurement process : a usability case study. [Internet] [Masters thesis]. University of Texas – Austin; 2010. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/ETD-UT-2010-12-2186.
Council of Science Editors:
Vasquez RM. A project plan for improving the performance measurement process : a usability case study. [Masters Thesis]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-12-2186
23.
Brockley, Susan Ragaz.
Measuring customer contribution to the agile software development process : a case study.
Degree: MSin Engineering, Electrical and Computer Engineering, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-12-2214
Agile project management and software development practices have become widely accepted in industry, and much of the currently published literature focuses on developers' uptake of the methodology. Although it is commonly known that customers play a key role in Agile project success, the extent to which they can influence a project is not as well understood. This case study measures the contribution of customer involvement to the success of Agile projects. The study demonstrates that active customer participation is one of the top three factors in successful Agile projects. It also demonstrates that successful Agile projects have customers who are "knowledgeable, committed, collaborative, representative, and empowered". Similarly, the study shows that successful Agile projects have customers who transfer domain knowledge to project team members efficiently and effectively. The study concludes with recommendations for developers and customers that maximize an Agile project's potential for success.
Advisors/Committee Members: Perry, Dewayne E. (advisor), Krasner, Herb (advisor).
Subjects/Keywords: Agile; Project management; Measurement
APA (6th Edition):
Brockley, S. R. (2010). Measuring customer contribution to the agile software development process : a case study. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-12-2214
Chicago Manual of Style (16th Edition):
Brockley, Susan Ragaz. “Measuring customer contribution to the agile software development process : a case study.” 2010. Masters Thesis, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/ETD-UT-2010-12-2214.
MLA Handbook (7th Edition):
Brockley, Susan Ragaz. “Measuring customer contribution to the agile software development process : a case study.” 2010. Web. 05 Mar 2021.
Vancouver:
Brockley SR. Measuring customer contribution to the agile software development process : a case study. [Internet] [Masters thesis]. University of Texas – Austin; 2010. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/ETD-UT-2010-12-2214.
Council of Science Editors:
Brockley SR. Measuring customer contribution to the agile software development process : a case study. [Masters Thesis]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-12-2214
24.
Ramasamy Kandasamy, Manimozhian.
Efficient state space exploration for parallel test generation.
Degree: MSin Engineering, Electrical and Computer Engineering, 2009, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2009-05-131
Automating the generation of test cases for software is an active area of research. Specification-based test generation is an approach in which a formal representation of a method is analyzed to generate valid test cases. Constraint solving and state space exploration are important aspects of specification-based test generation. One problem with specification-based testing is that the size of the state space explodes when the approach is applied to code of practical size. Hence, finding ways to reduce the number of candidates to explore within the state space is important for making this approach practical in industry. Korat is a tool that generates test cases for Java programs based on predicates that validate the inputs to a method. Several ongoing research efforts aim to increase the tool's effectiveness in handling large state spaces; parallelizing Korat and minimizing the exploration of invalid candidates are the active research directions.
This report surveys the basic algorithms of Korat, PKorat, and Fast Korat. PKorat is a parallel version of Korat and aims to take advantage of the multi-processor and multi-core systems available. Fast Korat implements four optimizations that reduce the number of candidates explored to generate valid candidates and reduce the amount of time required to explore each candidate. The report also presents execution-time results for generating test candidates for a binary tree, a doubly linked list, and a sorted singly linked list from their respective predicates.
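Korat's input, described above as a predicate that validates inputs, is an imperative class invariant, conventionally called repOk. A representative repOk for the sorted singly linked list mentioned in the report is sketched below; the finitization and search that Korat layers on top of it are not shown.

```java
import java.util.HashSet;
import java.util.Set;

/** A Korat-style class invariant (repOk) for a sorted singly linked list.
 *  Korat enumerates candidate structures within given bounds and keeps
 *  exactly those for which repOk returns true. */
public class SortedList {
    static class Node { int value; Node next; }
    Node header;   // first node, or null if empty
    int size;      // cached number of nodes

    boolean repOk() {
        Set<Node> visited = new HashSet<>();
        for (Node n = header; n != null; n = n.next) {
            if (!visited.add(n)) return false;                           // cyclic
            if (n.next != null && n.value > n.next.value) return false;  // unsorted
        }
        return visited.size() == size;                                   // size consistent
    }
}
```

Korat prunes its search using the fields repOk actually reads, which is why the invariant's early returns matter for performance as well as correctness.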
Advisors/Committee Members: Khurshid, Sarfraz (advisor), Perry, Dewayne E. (committee member).
Subjects/Keywords: Automated test generation; Specification-based testing; Parallelizing; Korat; State space exploration; Java; Algorithms; Data structures; MPI; TACC
APA (6th Edition):
Ramasamy Kandasamy, M. (2009). Efficient state space exploration for parallel test generation. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2009-05-131
Chicago Manual of Style (16th Edition):
Ramasamy Kandasamy, Manimozhian. “Efficient state space exploration for parallel test generation.” 2009. Masters Thesis, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/ETD-UT-2009-05-131.
MLA Handbook (7th Edition):
Ramasamy Kandasamy, Manimozhian. “Efficient state space exploration for parallel test generation.” 2009. Web. 05 Mar 2021.
Vancouver:
Ramasamy Kandasamy M. Efficient state space exploration for parallel test generation. [Internet] [Masters thesis]. University of Texas – Austin; 2009. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/ETD-UT-2009-05-131.
Council of Science Editors:
Ramasamy Kandasamy M. Efficient state space exploration for parallel test generation. [Masters Thesis]. University of Texas – Austin; 2009. Available from: http://hdl.handle.net/2152/ETD-UT-2009-05-131
25.
Bondalapati, Anil Kumar.
A comparative evaluation of project management tools for extreme programming.
Degree: MSin Engineering, Electrical and Computer Engineering, 2012, University of Texas – Austin
URL: http://hdl.handle.net/2152/43679
Extreme Programming (XP) is considered the most widely used “agile” software methodology. Like other agile methods, XP promotes regular releases in short development cycles, improving efficiency and introducing checkpoints where new client requirements can be incorporated. This study evaluated the project management tools commonly used in industry to support an XP development team. A selected set of these tools was evaluated, and their strengths and weaknesses are reported here. We also performed a gap analysis of the tools available in the market, identified the three tools that are most widely used, and determined to what extent they support the XP practices. One major conclusion of the research is that no single tool supports all the various types of XP projects. The three most widely used tools cover most XP projects but still have limitations in the areas of communications management, human resources management, and risk management. As a result of this research, our next step is a more detailed evaluation of the PPTS tool with the goal of introducing the missing features; this is feasible because PPTS is an open-source tool. We would also like to investigate related development and testing tools that could be integrated with PPTS to produce a more functional IDE that fully supports XP projects in both management and engineering practices.
Advisors/Committee Members: Perry, Dewayne E. (advisor), Krasner, Herb (committee member).
Subjects/Keywords: XP tools; XP management tools evaluation; PPTS; Project planning and tracking system
APA (6th Edition):
Bondalapati, A. K. (2012). A comparative evaluation of project management tools for extreme programming. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/43679
Chicago Manual of Style (16th Edition):
Bondalapati, Anil Kumar. “A comparative evaluation of project management tools for extreme programming.” 2012. Masters Thesis, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/43679.
MLA Handbook (7th Edition):
Bondalapati, Anil Kumar. “A comparative evaluation of project management tools for extreme programming.” 2012. Web. 05 Mar 2021.
Vancouver:
Bondalapati AK. A comparative evaluation of project management tools for extreme programming. [Internet] [Masters thesis]. University of Texas – Austin; 2012. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/43679.
Council of Science Editors:
Bondalapati AK. A comparative evaluation of project management tools for extreme programming. [Masters Thesis]. University of Texas – Austin; 2012. Available from: http://hdl.handle.net/2152/43679
26.
Raman, Nandita.
Benchmarking tests on recovery oriented computing.
Degree: MSin Engineering, Electrical and Computer Engineering, 2012, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2012-05-5188
Benchmarks have played a very important role in guiding the progress of computer systems, and they have a major role to play in autonomous environments in particular. System crashes and software failures are a basic part of a software system’s life cycle, and the main purpose of recovery-oriented computing is to make systems less vulnerable to them. This is usually done by reducing downtime: automatically and efficiently recovering from a broad class of transient software failures without having to modify applications. There have been various types of benchmarks for recovery from failure, but in this paper we create a benchmark framework, called the warning benchmarks, to measure and evaluate recovery-oriented systems. It consists of known and unknown failures and several benchmark techniques, which the warning benchmarks handle with the help of various other techniques from software fault analysis.
Advisors/Committee Members: Perry, Dewayne E. (advisor), Krasner, Herb (committee member).
Subjects/Keywords: Benchmarking; Recovery; Software metrics
APA (6th Edition):
Raman, N. (2012). Benchmarking tests on recovery oriented computing. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2012-05-5188
Chicago Manual of Style (16th Edition):
Raman, Nandita. “Benchmarking tests on recovery oriented computing.” 2012. Masters Thesis, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/ETD-UT-2012-05-5188.
MLA Handbook (7th Edition):
Raman, Nandita. “Benchmarking tests on recovery oriented computing.” 2012. Web. 05 Mar 2021.
Vancouver:
Raman N. Benchmarking tests on recovery oriented computing. [Internet] [Masters thesis]. University of Texas – Austin; 2012. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/ETD-UT-2012-05-5188.
Council of Science Editors:
Raman N. Benchmarking tests on recovery oriented computing. [Masters Thesis]. University of Texas – Austin; 2012. Available from: http://hdl.handle.net/2152/ETD-UT-2012-05-5188
27.
Chocka Narayanan, Sowmiya.
Clustered Test Execution using Java PathFinder.
Degree: MSin Engineering, Electrical and Computer Engineering, 2010, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2010-05-1292
Recent advances in test automation have produced a host of new techniques for automated test generation, which traditionally has largely been a manual and expensive process. These techniques enable the generation of much larger numbers of tests at a much reduced cost. When executed successfully, these tests yield a significant increase in our confidence in a program's correctness. However, as our ability to generate greater numbers of tests increases, we are faced with the likely high cost of executing all the tests, in terms of total execution time.
This thesis presents a novel approach, clustered test execution, to address this problem. Instead of executing each test case separately, we execute parts of several tests in a single execution, which then forks into several directions as the behaviors of the tests diverge. Our insight is that in a large test suite, several tests are likely to have common initial execution segments; such a segment does not have to be executed over and over again, but can be executed once with the result shared across all those tests. As an enabling technology we use the Java PathFinder (JPF) model checker, a popular explicit-state model checker for Java programs. Experimental results show that our clustering approach to test execution using JPF provides speed-ups over executing each test in turn from a test suite on the JPF Java virtual machine.
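The prefix-sharing insight can be demonstrated without a model checker if test operations act on an immutable state, so that forking an execution costs nothing. The sketch below groups tests by their next operation and runs each shared operation once per group; JPF instead checkpoints and restores full JVM states, which is what makes the idea work for arbitrary Java code. All names here are ours.

```java
import java.util.*;
import java.util.function.UnaryOperator;

/** Clustered test execution over an immutable state S: tests sharing a
 *  prefix of operations execute that prefix once, then fork. Grouping is
 *  by operator identity, so shared prefixes must reuse operator objects.
 *  JPF achieves the same effect for mutable state via VM checkpointing. */
public class ClusteredRunner {
    public static <S> void run(List<List<UnaryOperator<S>>> tests, S initial) {
        step(tests, 0, initial);
    }
    private static <S> void step(List<List<UnaryOperator<S>>> tests, int depth, S state) {
        Map<UnaryOperator<S>, List<List<UnaryOperator<S>>>> groups = new LinkedHashMap<>();
        for (List<UnaryOperator<S>> t : tests) {
            if (t.size() == depth) {
                System.out.println("test finished in state: " + state);
            } else {
                groups.computeIfAbsent(t.get(depth), k -> new ArrayList<>()).add(t);
            }
        }
        for (Map.Entry<UnaryOperator<S>, List<List<UnaryOperator<S>>>> e : groups.entrySet())
            step(e.getValue(), depth + 1, e.getKey().apply(state));  // shared op runs once
    }
}
```

With k tests sharing a common prefix of length p, the prefix executes once rather than k times, which is the source of the reported speed-ups.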
Advisors/Committee Members: Khurshid, Sarfraz (advisor), Perry, Dewayne E. (committee member).
Subjects/Keywords: Clustered Test Execution; Java PathFinder model checker
APA (6th Edition):
Chocka Narayanan, S. (2010). Clustered Test Execution using Java PathFinder. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-05-1292
Chicago Manual of Style (16th Edition):
Chocka Narayanan, Sowmiya. “Clustered Test Execution using Java PathFinder.” 2010. Masters Thesis, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/ETD-UT-2010-05-1292.
MLA Handbook (7th Edition):
Chocka Narayanan, Sowmiya. “Clustered Test Execution using Java PathFinder.” 2010. Web. 05 Mar 2021.
Vancouver:
Chocka Narayanan S. Clustered Test Execution using Java PathFinder. [Internet] [Masters thesis]. University of Texas – Austin; 2010. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/ETD-UT-2010-05-1292.
Council of Science Editors:
Chocka Narayanan S. Clustered Test Execution using Java PathFinder. [Masters Thesis]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-05-1292
28.
Iyer, Suchitra S.
An analytical study of metrics and refactoring.
Degree: MSin Engineering, Electrical and Computer Engineering, 2009, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2009-05-147
Object-oriented systems that undergo repeated modifications commonly endure a loss of quality and design decay. This problem is often remedied by applying refactorings. Refactoring is one of the most important and commonly used techniques to improve the quality of code by eliminating redundancy and reducing complexity; frequently refactored code is believed to be easier to understand, maintain, and test. Object-oriented metrics provide an easy means to extract useful and measurable information about the structure of a software system. Metrics have been used to identify refactoring opportunities, detect refactorings that have previously been applied, and gauge quality improvements after the application of refactorings.
This thesis provides an in-depth analytical study of the relationship between metrics and refactorings. For this purpose we analyzed 136 versions of 4 different open source projects. We used RefactoringCrawler, an automatic refactoring detection tool, to identify refactorings, and then analyzed various metrics to study whether metrics can be used to (1) reliably identify refactoring opportunities, (2) detect refactorings that were previously applied, and (3) estimate the impact of refactoring on software quality.
In conclusion, our study showed that metrics cannot be reliably used to either identify refactoring opportunities or detect refactorings. It is very difficult to use metrics to estimate the impact of refactoring; however, studying the evolution of metrics at a system level indicates that refactoring does improve software quality and reduce complexity.
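The metrics tracked in studies like this are mostly standard object-oriented and complexity measures computed per version. As a flavor of the simplest kind, the sketch below approximates cyclomatic complexity by counting branching constructs in a method body; a real metric suite, including whatever this study used, computes over parsed ASTs rather than regexes.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Crude approximation of cyclomatic complexity: 1 + the number of
 *  branching constructs in a method body. Regex-based and therefore
 *  only indicative; real tools analyze the AST. */
public class CrudeMetrics {
    private static final Pattern BRANCH =
        Pattern.compile("\\b(if|for|while|case|catch)\\b|&&|\\|\\|");

    static int cyclomatic(String methodBody) {
        Matcher m = BRANCH.matcher(methodBody);
        int decisions = 0;
        while (m.find()) decisions++;
        return decisions + 1;
    }
    public static void main(String[] args) {
        System.out.println(cyclomatic("if (a && b) { while (c) {} }"));  // -> 4
    }
}
```

Tracking such a number across 136 versions is cheap; the study's point is that interpreting its movements as evidence of specific refactorings is not reliable.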
Advisors/Committee Members: Perry, Dewayne E. (advisor), Kim, Miryung (committee member).
Subjects/Keywords: Software maintenance; Software metrics; Refactoring; Analytical study; Software evolution
APA (6th Edition):
Iyer, S. S. (2009). An analytical study of metrics and refactoring. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2009-05-147
Chicago Manual of Style (16th Edition):
Iyer, Suchitra S. “An analytical study of metrics and refactoring.” 2009. Masters Thesis, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/ETD-UT-2009-05-147.
MLA Handbook (7th Edition):
Iyer, Suchitra S. “An analytical study of metrics and refactoring.” 2009. Web. 05 Mar 2021.
Vancouver:
Iyer SS. An analytical study of metrics and refactoring. [Internet] [Masters thesis]. University of Texas – Austin; 2009. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/ETD-UT-2009-05-147.
Council of Science Editors:
Iyer SS. An analytical study of metrics and refactoring. [Masters Thesis]. University of Texas – Austin; 2009. Available from: http://hdl.handle.net/2152/ETD-UT-2009-05-147
29.
Che, Meiru.
Scenario-based architectural design decisions documentation and evolution.
Degree: MSin Engineering, Electrical and Computer Engineering, 2011, University of Texas – Austin
URL: http://hdl.handle.net/2152/ETD-UT-2011-08-4137
Software architecture can be considered a set of architectural design decisions. Capturing and representing architectural design decisions during the architecting process is necessary for reducing architectural knowledge evaporation. Moreover, managing the evolution of architectural design decisions helps to maintain consistency between requirements and the deployed system. In this thesis, we create the Triple View Model (TVM) as a general architecture framework for documenting architectural design decisions. The TVM clarifies the notion of architectural design decisions in three different views and covers key features of the architecting process. Based on the TVM, we propose a scenario-based methodology (SceMethod) to manage the documentation and the evolution of architectural design decisions. We also conduct a case study on an industrial project to validate the applicability and the effectiveness of the TVM and the SceMethod. The results show that they provide complete documentation of architectural design decisions for creating a system architecture, and that they support architecture evolution well under changing requirements.
Advisors/Committee Members: Perry, Dewayne E. (advisor), Khurshid, Sarfraz (committee member).
Subjects/Keywords: Architectural design decision; Architecture documentation; Architecture evolution; Scenario; Architectural knowledge
APA (6th Edition):
Che, M. (2011). Scenario-based architectural design decisions documentation and evolution. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2011-08-4137
Chicago Manual of Style (16th Edition):
Che, Meiru. “Scenario-based architectural design decisions documentation and evolution.” 2011. Masters Thesis, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/ETD-UT-2011-08-4137.
MLA Handbook (7th Edition):
Che, Meiru. “Scenario-based architectural design decisions documentation and evolution.” 2011. Web. 05 Mar 2021.
Vancouver:
Che M. Scenario-based architectural design decisions documentation and evolution. [Internet] [Masters thesis]. University of Texas – Austin; 2011. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/ETD-UT-2011-08-4137.
Council of Science Editors:
Che M. Scenario-based architectural design decisions documentation and evolution. [Masters Thesis]. University of Texas – Austin; 2011. Available from: http://hdl.handle.net/2152/ETD-UT-2011-08-4137
30.
Bhattacharya, Sutirth.
Architectural metrics and evaluation for component based software systems.
Degree: PhD, Electrical and computer engineering, 2006, University of Texas – Austin
URL: http://hdl.handle.net/2152/29552
Component-based software engineering has been perceived to have immense reuse potential. This area has evoked wide interest and has led to considerable investment in research and development efforts. Most of these investigations have explored internal characteristics of software components, such as correctness, reliability, modularity, interoperability, understandability, maintainability, readability, portability, and generality, for promoting reuse. But experience over the past decade and a half has demonstrated that the usefulness of a component depends as much on the context into which it fits as on its internal characteristics. Software architecture descriptions that take into account the requirements of the domain can serve as this context. While the Perry and Wolf definition of software architecture has been widely acknowledged, a number of architectural description languages (ADLs) have emerged that aim to capture various facets of a software system, using varying degrees of formalism. There is currently no agreement on a standard approach to documenting software architectures that would define the vocabulary for architectural semantics. Despite the lack of specification standards for components, Software Product Lines (SPLs) and Commercial Off The Shelf (COTS) components do provide a rich supporting base for creating software architectures, and they promise significant improvements in the quality of software configurations that can be composed from pre-built components. However, further research is needed to evaluate the architectural merits of such component-based configurations. In this research, we identify the key aspects of software that need to be specified to enable useful analysis at an architectural level. We also propose a set of metrics that enable objective evaluation of reusability potential. Architectural research has established that software architectural styles provide a way to achieve a desired coherence for component-based architectures. Different architectural styles enforce different quality attributes for a system. Thus, if the architectural style of an emergent system could be predicted, a person playing the role of system integrator could make the changes necessary to ensure that the quality attributes dictated by the system requirements were satisfied before the actual system is built and deployed, somewhat mitigating project risks. As part of this research, we propose a model for predicting architectural styles based on the use cases that a system configuration must satisfy, and we demonstrate how our approach can be used to determine stylistic conformance. We also propose objective methods for assessing architectural divergence, erosion, and drift during system evolution and maintenance.
Advisors/Committee Members: Perry, Dewayne E. (advisor).
Subjects/Keywords: Component based software; Architectural metrics; Software architecture; Architectural description languages; ADL; Software Product Lines; Commercial Off The Shelf
APA (6th Edition):
Bhattacharya, S. (2006). Architectural metrics and evaluation for component based software systems. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/29552
Chicago Manual of Style (16th Edition):
Bhattacharya, Sutirth. “Architectural metrics and evaluation for component based software systems.” 2006. Doctoral Dissertation, University of Texas – Austin. Accessed March 05, 2021.
http://hdl.handle.net/2152/29552.
MLA Handbook (7th Edition):
Bhattacharya, Sutirth. “Architectural metrics and evaluation for component based software systems.” 2006. Web. 05 Mar 2021.
Vancouver:
Bhattacharya S. Architectural metrics and evaluation for component based software systems. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2006. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2152/29552.
Council of Science Editors:
Bhattacharya S. Architectural metrics and evaluation for component based software systems. [Doctoral Dissertation]. University of Texas – Austin; 2006. Available from: http://hdl.handle.net/2152/29552
◁ [1] [2] ▶