You searched for subject:(Software Verification)
Showing records 1 – 30 of 312 total matches.

McMaster University
1.
Zhong, Hongsheng.
Secure and Trusted Partial White-box Verification Based on Garbled Circuits.
Degree: MSc, 2016, McMaster University
URL: http://hdl.handle.net/11375/20551
▼ Verification is a process that checks whether a program G, implemented by a developer, correctly complies with the corresponding requirement specifications. A verifier, whose interests may differ from the developer's, conducts such verification on G. However, since the developer and the verifier probably distrust each other, either of them may exhibit harmful behavior and take advantage of the verification. Generally, the developer hopes to protect the content privacy of the program, while the verifier wants to conduct effective verification to detect possible errors. Therefore, a question inevitably arises: how can verification be conducted effectively and efficiently without breaking the security requirements of the two parties?
We treat verification as a process akin to testing, i.e., verifying the design with test cases and checking the results. To make the verification more effective, we remove the limitations of traditional testing approaches, such as black-box and white-box testing, and propose "partial white-box verification".
Using circuits as the means of description, we regard the program as a circuit graph. By making the structure of the graph public, we make the verification process over such a graph partially white-box. Via garbled circuits, commitment schemes and other techniques, the security requirements of such verification are guaranteed.
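The garbled-circuit machinery this scheme relies on can be illustrated on a single gate. The toy Python sketch below is an illustration of the standard technique, not the construction in the thesis, and its hash-based encryption is not production-grade: each wire gets two random labels encoding 0 and 1, each truth-table row is encrypted under the pair of input labels that selects it, and an evaluator holding one label per input wire recovers exactly the output label without learning the underlying bits.

```python
import hashlib
import os

def H(*parts):
    """Hash helper used as a toy key-derivation / encryption pad."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    # Two random 32-byte labels per wire; index i encodes bit i.
    wa = [os.urandom(32), os.urandom(32)]
    wb = [os.urandom(32), os.urandom(32)]
    wc = [os.urandom(32), os.urandom(32)]
    table = []
    for a in (0, 1):
        for b in (0, 1):
            key = H(wa[a], wb[b])
            # The row id lets the evaluator locate its row; the pad hides
            # the output label from anyone lacking both input labels.
            table.append((H(key, b"row"), xor(wc[a & b], H(key, b"pad"))))
    return wa, wb, wc, table

def evaluate(table, label_a, label_b):
    key = H(label_a, label_b)
    row_id = H(key, b"row")
    for rid, ct in table:
        if rid == row_id:
            return xor(ct, H(key, b"pad"))
    raise ValueError("no matching row")
```

A real construction would also permute the rows and use authenticated encryption; the point here is only that evaluation reveals wire labels rather than wire values, which is what allows a verifier to run test cases over a public circuit structure without seeing the developer's secrets.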
Thesis
Master of Science (MSc)
Advisors/Committee Members: Karakostas, George, Wassyng, Alan, Computing and Software.
Subjects/Keywords: Cryptography; Software Engineering; Software Verification

Rice University
2.
Song, Daniel W.
The Design and Implementation of a Verified File System with End-to-End Data Integrity.
Degree: PhD, Engineering, 2020, Rice University
URL: http://hdl.handle.net/1911/108425
▼ Despite significant research and engineering efforts, many of today's important computer systems suffer from bugs. To increase the reliability of software systems, recent work has applied formal verification to certify the correctness of such systems, with recent successes including certified file systems and certified cryptographic protocols, albeit using quite different proof tactics and toolchains. Unifying these concepts, we present the first certified file system that uses cryptographic primitives to protect itself against tampering. Our certified file system defends against adversaries that might wish to tamper with the raw disk. Such an "untrusted storage" threat model captures the behavior of storage devices that might silently return erroneous bits as well as adversaries who might have limited access to a disk, perhaps while in transit.
In this paper, we present IFSCQ, a certified cryptographic file system with strong integrity guarantees. IFSCQ combines and extends work on cryptographic file systems and formally certified file systems to prove that our design is correct. It is the first certified file system that is secure against strong adversaries that can maliciously corrupt on-disk data and metadata, including attempting to roll back the disk to earlier versions of valid data. IFSCQ achieves this by constructing a Merkle hash tree of the whole disk, and by proving that tampered disk blocks will always be detected if they ever occur. We demonstrate that IFSCQ runs with reasonable overhead while detecting several kinds of attacks.
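The core integrity mechanism, a Merkle hash tree over the disk blocks, is easy to sketch. The toy Python below illustrates the data structure only, not IFSCQ's formally verified implementation: it hashes each block and then hashes adjacent pairs up to a single root digest, so any change to any block changes the root.

```python
import hashlib

def sha(data):
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Hash every disk block, then repeatedly hash adjacent pairs
    until a single root digest remains."""
    level = [sha(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd node out
        level = [sha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# A verifier holding only the root detects tampering with the raw disk:
disk = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(disk)
tampered = list(disk)
tampered[2] = b"evil!!"
assert merkle_root(tampered) != root
```

What the thesis adds on top of this familiar structure is a machine-checked proof that tampered blocks are always detected, including rollback to stale but once-valid data.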
Advisors/Committee Members: Wallach, Dan S (advisor).
Subjects/Keywords: software security; systems; formal verification

University of Waterloo
3.
Wu, Chun Wah Wallace.
Methods for Reducing Monitoring Overhead in Runtime Verification.
Degree: 2013, University of Waterloo
URL: http://hdl.handle.net/10012/7215
▼ Runtime verification is a lightweight technique that serves to complement existing approaches, such as formal methods and testing, to ensure system correctness. In runtime verification, monitors are synthesized to check a system at run time against a set of properties the system is expected to satisfy. Runtime verification may be used to determine software faults before and after system deployment. The monitor(s) can be synthesized to notify, steer and/or perform system recovery from detected software faults at run time.
The research and proposed methods presented in this thesis aim to reduce the monitoring overhead of runtime verification, in terms of memory and execution time, by leveraging time-triggered techniques for monitoring system events. Traditionally, runtime verification frameworks employ event-triggered monitors, where the monitor is invoked after every system event. Because system events can be sporadic or bursty in nature, event-triggered monitoring behaviour is difficult to predict. Time-triggered monitors, on the other hand, periodically preempt the system and process its events, making monitoring behaviour predictable. However, accurate reconstruction of the software system's state is not guaranteed (state changes between samples may be missed).
The first part of this thesis analyzes three heuristics that efficiently solve the NP-complete problem of minimizing the amount of memory required to store system state changes while guaranteeing accurate state reconstruction. The experimental results demonstrate that adopting near-optimal algorithms does not greatly change the memory consumption and execution time of monitored programs; hence, NP-completeness is likely not an obstacle for time-triggered runtime verification. The second part of this thesis introduces a novel runtime verification technique called hybrid runtime verification, which enables the monitor to toggle between event- and time-triggered modes of operation. The aim of this approach is to reduce the overall execution-time overhead of runtime monitoring. Minimizing this overhead via hybrid runtime verification is not known to be in NP, so an integer linear programming heuristic is formulated to determine near-optimal hybrid monitoring schemes. Experimental results show that the heuristic typically selects monitoring schemes that are equal to or better than naively selecting a single operation mode for monitoring.
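The contrast between the two monitor styles can be made concrete with a small sketch. The Python below is a hypothetical illustration (the class, method names, and buffer policy are invented, not the thesis's framework): the instrumented program appends state changes to a bounded buffer, and a periodic monitor task drains the buffer at each sample, so no change is missed unless the buffer overflows between samples. Sizing that buffer against the sampling period is exactly the memory/time trade-off studied here.

```python
class TimeTriggeredMonitor:
    """Toy time-triggered monitor: the program records state changes into
    a bounded buffer; the monitor drains it at each sampling point."""

    def __init__(self, prop, capacity=16):
        self.prop = prop          # predicate each state value must satisfy
        self.capacity = capacity  # buffer slots available between samples
        self.buffer = []
        self.violations = []

    def record(self, value):
        """Called by the instrumented program on every state change."""
        if len(self.buffer) >= self.capacity:
            # Sampling period too long for this buffer: state is lost.
            raise OverflowError("buffer overflow between samples")
        self.buffer.append(value)

    def sample(self):
        """Called periodically by the monitor task."""
        for v in self.buffer:
            if not self.prop(v):
                self.violations.append(v)
        self.buffer.clear()
```

An event-triggered monitor corresponds to calling `sample()` after every single `record()`; hybrid monitoring toggles between the two regimes to minimize total overhead.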
Subjects/Keywords: software; runtime verification; runtime monitoring

Iowa State University
4.
Awadhutkar, Payas.
Human-centric verification for software safety and security.
Degree: 2020, Iowa State University
URL: https://lib.dr.iastate.edu/etd/18009
▼ Software forms a critical part of our lives today. Verifying software to avoid violations of safety and security properties is a necessary task. It is also imperative to have an assurance that the verification process was correct. We propose a human-centric approach to software verification. This involves enabling human-machine collaboration to detect vulnerabilities and to prove the correctness of the verification.
We discuss two classes of vulnerabilities. The first class is Algorithmic Complexity Vulnerabilities (ACVs). ACVs are a class of software security vulnerabilities that enable denial-of-service attacks. The description of an ACV is not known a priori; the problem is equivalent to searching for a needle in a haystack when we don't know what the needle looks like. We present a novel approach to detect ACVs in web applications, and a case study audit from DARPA's Space/Time Analysis for Cybersecurity (STAC) program to illustrate our approach.
The second class of vulnerabilities is Memory Leaks. Although the description of the Memory Leak (ML) problem is known, a proof of the correctness of the verification is needed to establish trust in the results. We present an approach inspired by the works of Alan Perlis to compute evidence of the verification which can be scrutinized by a human to prove the correctness of the verification. We present a novel abstraction, the Evidence Graph, that succinctly captures the verification evidence and show how to compute the evidence. We evaluate our approach against ML instances in the Linux kernel and report improvement over the state-of-the-art results. We also present two case studies to illustrate how the Evidence Graph can be used to prove the correctness of the verification.
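The flavor of human-checkable verification evidence can be suggested with a toy example (illustrative Python, not the thesis's Evidence Graph abstraction): over a control-flow graph, an allocation is a potential leak exactly when some path from the allocation site to the exit avoids every node that releases the pointer, and that concrete path is evidence a human can scrutinize.

```python
def leak_evidence(cfg, alloc, frees, exit_node):
    """Return a path from `alloc` to `exit_node` that avoids every node
    in `frees` (a witness of a potential leak), or None if no such path
    exists. `cfg` maps each node to its list of successors."""
    stack, seen = [(alloc, [alloc])], set()
    while stack:
        node, path = stack.pop()
        if node == exit_node:
            return path              # the evidence: a leaking path
        if node in seen:
            continue
        seen.add(node)
        for succ in cfg.get(node, []):
            if succ not in frees:    # paths through a free are safe
                stack.append((succ, path + [succ]))
    return None

# Hypothetical CFG: the error path skips the free, so it leaks.
CFG = {"alloc": ["check"], "check": ["free", "err"],
       "free": ["exit"], "err": ["exit"]}
```

Running `leak_evidence(CFG, "alloc", {"free"}, "exit")` yields the path through `"err"`; a fixed program, where every path releases the pointer, yields `None`.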
Subjects/Keywords: Software Analysis; Software Assurance; Software Safety; Software Security; Software Verification

Stellenbosch University
5.
Rajaona, Solofomampionona Fortunat.
An algebraic framework for reasoning about security.
Degree: MSc, Mathematical Sciences, 2013, Stellenbosch University
URL: http://hdl.handle.net/10019/9983
▼ ENGLISH ABSTRACT: Stepwise development of a program using refinement ensures that the program correctly implements its requirements. The specification of a system is "refined" incrementally to derive an implementable program. The programming space includes both specifications and implementable code, and is ordered by the refinement relation, which obeys certain mathematical laws. Morgan proposed a modification of this "classical" refinement for systems where the confidentiality of some information is critical. Programs distinguish between "hidden" and "visible" variables, and refinement has to carry a security requirement. First, we review refinement for classical programs and present Morgan's approach to ignorance-preserving refinement. We introduce the Shadow Semantics, a programming model that captures essential properties of classical refinement while preserving the ignorance of hidden variables. The model invalidates some classical laws which do not preserve security, while satisfying new laws. Our approach is algebraic: we propose algebraic laws to describe the properties of ignorance-preserving refinement, thus completing the laws proposed in earlier work. Moreover, we show that the laws are sound in the Shadow Semantics. Finally, following the approach of Hoare and He for classical programs, we give a completeness result for the program algebra of ignorance-preserving refinement.
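For orientation, the classical refinement relation satisfies laws of roughly the following shape. This is a standard textbook selection, not necessarily the exact laws treated in the thesis, but it is laws like these that must be re-examined once hidden variables are introduced:

```latex
% Classical refinement laws (read P \sqsubseteq Q as "P is refined by Q").
\begin{align*}
P &\sqsubseteq P && \text{(reflexivity)}\\
P \sqsubseteq Q \wedge Q \sqsubseteq R &\Rightarrow P \sqsubseteq R && \text{(transitivity)}\\
P \sqsubseteq Q &\Rightarrow P \mathbin{;} R \sqsubseteq Q \mathbin{;} R && \text{(monotonicity of sequencing)}\\
P \sqcap Q &\sqsubseteq P && \text{(reducing nondeterminism)}
\end{align*}
```

The last law is the archetypal casualty in the secure setting: resolving nondeterminism over a hidden variable can reveal information about it, so an ignorance-preserving semantics must reject some instances of it.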
Advisors/Committee Members: Sanders, J. W., Stellenbosch University, Faculty of Science, Dept. of Mathematical Sciences.
Subjects/Keywords: Mathematics; Computer software – Development; Computer software – Verification

University of Illinois – Urbana-Champaign
6.
Pek, Edgar.
Automated deductive verification of systems software.
Degree: PhD, Computer Science, 2015, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/88993
▼ Software has become an integral part of our everyday lives, and so has our reliance on its correct functioning. Systems software lies at the heart of computer systems; consequently, ensuring its reliability and security is of paramount importance. This thesis explores automated deductive verification for increasing the reliability and security of systems software. The thesis comprises three main threads. The first thread describes how state-of-the-art deductive verification techniques can help in developing a more secure operating system. We have developed a prototype of an Android-based operating system with strong assurance guarantees. Operating systems code relies heavily on mutable data structures. In our experience, reasoning about such pointer-manipulating programs was the hardest aspect of the operating system verification effort, because correctness criteria describe intricate combinations of structure (shape), content (data), and separation. Thus, in the second thread, we explore the design and development of an automated verification system for assuring correctness of pointer-manipulating programs, using an extension of Hoare's logic for reasoning about programs that access and update heap-allocated data structures. We have developed a verification framework that allows reasoning about C programs using only domain-specific code annotations. The same thread contains a novel idea that enables efficient runtime checking of assertions that can express properties of dynamically manipulated linked-list data structures. Finally, we describe work that paves a new way for reasoning about distributed protocols. We propose certified program models, where an executable language (such as C) is used for modelling – an executable language enables testing, and emerging program verifiers for mainstream executable languages enable certification of such models. As an instance of this approach, concurrent C code is used for modelling, and a program verifier for concurrent C (VCC from Microsoft Research) is used for certification of a new class of systems software that serves as a backbone for efficient distributed data storage.
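The kind of annotation-driven reasoning described here can be suggested with a toy example (Python assertions standing in for C code annotations; the function and property names are invented): a specification for a list operation typically combines a shape property (the list stays acyclic), a data property (its length grows by one), and a separation-flavored property (the old cells are untouched).

```python
class Node:
    """Heap-allocated singly linked list cell."""
    def __init__(self, val, nxt=None):
        self.val, self.nxt = val, nxt

def length(head):
    n = 0
    while head is not None:
        n, head = n + 1, head.nxt
    return n

def prepend(head, val):
    # Precondition (shape): head is an acyclic list.
    # Postcondition (data): the result is one element longer.
    # Postcondition (separation): the old list hangs off result.nxt,
    # unchanged; prepend allocated a new cell and touched no existing one.
    old_len = length(head)
    result = Node(val, head)
    assert length(result) == old_len + 1
    assert result.nxt is head
    return result
```

A deductive verifier discharges such conditions statically for all inputs, rather than checking them at run time as these assertions do.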
Advisors/Committee Members: Parthasarathy, Madhusudan (advisor), Parthasarathy, Madhusudan (Committee Chair), King, Samuel T (committee member), Ball, Thomas (committee member), Rosu, Grigore (committee member).
Subjects/Keywords: software verification; software security; automated deductive reasoning

University of Minnesota
7.
Lee, Young Sub.
Evaluation of the Benefits and Limitations of Verification Activities in Developing a Critical System using Model-Based Development.
Degree: M.S.E.E., Electrical Engineering, 2018, University of Minnesota
URL: http://hdl.handle.net/11299/200995
▼ In developing safety-critical cyber-physical systems, model-based development (MBD) promotes design and verification activities at the model level, which is an abstract description of the behavior of the software to be implemented. MBD tool suites such as MATLAB's Simulink and Stateflow implement the principles of MBD so that system developers can easily incorporate various verification and validation activities such as modeling, simulation, testing, and even formal verification. Among all activities involved in MBD, simulation and model-level testing are widely adopted to identify bugs in the early phases of model development. Formal verification of the design model, on the other hand, is often considered difficult or impractical, so it is less widely adopted in practice. This report investigates the practicality, limitations, and benefits of adopting various model-based V&V activities in developing a safety-critical cyber-physical system. We developed a rocket-launch control system in cooperation with a local rocketry club following MBD principles. More specifically, we developed a Simulink model and performed model-level simulation, testing, and model-level property verification of three safety properties. In each phase of the development, we demonstrate that each activity can be easily incorporated into the development process. The verification results from the Simulink Design Verifier show that our safety properties of concern hold, giving us confidence in the correctness of the model. This report will help people who design a critical system using MBD to choose the verification techniques that they need.
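Model-level property checking of the kind performed with the Simulink Design Verifier can be suggested with a textual toy (the states, events, and property below are invented for illustration, not taken from the rocket-launch model): exhaustively explore the reachable states of a small state machine and check a safety property in every one.

```python
# Toy launch controller as a finite state machine.
TRANSITIONS = {
    ("idle", "arm"): "armed",
    ("armed", "disarm"): "idle",
    ("armed", "launch"): "launched",
}
EVENTS = ["arm", "disarm", "launch"]

def reachable(start="idle"):
    """Explore every state reachable from `start` under any event sequence."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for e in EVENTS:
            t = TRANSITIONS.get((s, e))
            if t is not None and t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def check_safety():
    """Safety property: 'launch' is only ever enabled from 'armed'."""
    return all(TRANSITIONS.get((s, "launch")) is None or s == "armed"
               for s in reachable())
```

A model checker like the Design Verifier performs the analogous exploration symbolically over the Simulink model, rather than enumerating states explicitly.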
Subjects/Keywords: Model-Based Development; Software Development; Software Verification

University of Oxford
8.
Lewis, Matt.
Precise verification of C programs.
Degree: PhD, 2014, University of Oxford
URL: http://ora.ox.ac.uk/objects/uuid:34b5ed5a-160b-4e2c-8dac-eab62a24f78c
;
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.664841
▼ Most current approaches to software verification are one-sided – a safety prover will try to prove that a program is safe, while a bug-finding tool will try to find bugs. It is rare to find an analyser that is optimised for both tasks, which is problematic since it is hard to know in advance whether a program you wish to analyse is safe or not. The result of taking a one-sided approach to verification is false alarms: safety provers will often claim that safe programs have errors, while bug-finders will often be unable to find errors in unsafe programs. Orthogonally, many software verifiers are designed for reasoning about idealised programming languages that may not have widespread use. A common assumption made by verification tools is that program variables can take arbitrary integer values, while programs in most common languages use fixed-width bitvectors for their variables. This can have a real impact on the verification, leading to incorrect claims by the verifier. In this thesis we will show that it is possible to analyse C programs without generating false alarms, even if they contain unbounded loops, use non-linear arithmetic and have integer overflows. To do this, we will present two classes of analysis based on underapproximate loop acceleration and second-order satisfiability respectively. Underapproximate loop acceleration addresses the problem of finding deep bugs. By finding closed forms for loops, we show that deep bugs can be detected without unwinding the program and that this can be done without introducing false positives or masking errors. We then show that programs accelerated in this way can be optimised by inlining trace automata to reduce their reachability diameter. This inlining allows acceleration to be used as a viable technique for proving safety, as well as finding bugs. In the second part of the thesis, we focus on using second-order logic for program analysis. 
We begin by defining second-order SAT: an extension of propositional SAT that allows quantification over functions. We show that this problem is NEXPTIME-complete, and that it is polynomial time reducible to finite-state program synthesis. We then present a fully automatic, sound and complete algorithm for synthesising C programs from a specification written in C. Our approach uses a combination of bounded model checking, explicit-state model checking and genetic programming to achieve surprisingly good performance for a problem with such high complexity. We conclude by using second-order SAT to precisely and directly encode several program analysis problems including superoptimisation, de-obfuscation, safety and termination for programs using bitvector arithmetic and dynamically allocated lists.
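The idea behind underapproximate loop acceleration can be shown on a one-line loop (a made-up toy, not the thesis's algorithm): rather than unwinding `x = x0; while (...) x += step;` one iteration at a time, derive the closed form x_n = x0 + step*n and solve directly for an iteration count at which an assertion fails.

```python
def find_bug_by_unwinding(x0, step, target, bound):
    """Naive bounded unwinding: step the loop until x == target,
    giving up after `bound` iterations."""
    x = x0
    for n in range(bound):
        if x == target:
            return n
        x += step
    return None

def find_bug_accelerated(x0, step, target):
    """Accelerated check: solve x0 + step*n == target for n directly,
    so deep bugs are found without unwinding millions of iterations."""
    diff = target - x0
    if step != 0 and diff % step == 0 and diff // step >= 0:
        return diff // step
    return None
```

With `x0=1, step=3, target=7000003`, the accelerated version returns the witness iteration 2333334 immediately, where naive unwinding would need millions of steps; because the closed form is exact, no false positives are introduced.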
Subjects/Keywords: 005.3; Theory and automated verification; Software verification; program synthesis; software engineering

University of Oxford
9.
Ramsay, Steven J.
Intersection types and higher-order model checking.
Degree: PhD, 2014, University of Oxford
URL: http://ora.ox.ac.uk/objects/uuid:46b7bc70-3dfe-476e-92e7-245b7629ae4e
;
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.596036
Higher-order recursion schemes are systems of equations that are used to define finite and infinite labelled trees. Since, as Ong has shown, the trees defined have a decidable monadic second-order theory, recursion schemes have drawn the attention of research in program verification, where they sit naturally as a higher-order, functional analogue of Boolean programs. Driven by applications, fragments have been studied, algorithms developed and extensions proposed; the emerging theme is called higher-order model checking. Kobayashi has pioneered an approach to higher-order model checking using intersection types, from which many recent advances have followed. The key is a characterisation of model checking as a problem of intersection type assignment. This dissertation contributes to both the theory and practice of the intersection type approach. A new, fixed-parameter polynomial-time decision procedure is described for the alternating trivial automaton fragment of higher-order model checking. The algorithm uses a novel, type-directed form of abstraction refinement, in which behaviours of the scheme are distinguished according to the intersection types that they inhabit. Furthermore, by using types to reason about acceptance and rejection simultaneously, the algorithm is able to converge on a solution from two sides. An implementation, Preface, and an extensive body of evidence demonstrate empirically that the algorithm scales well to schemes of several thousand rules. A comparison with other tools on benchmarks derived from current practice and the related literature puts it well beyond the state of the art. A generalisation of the intersection type approach is presented in which higher-order model checking is seen as an instance of exact abstract interpretation.
Intersection type assignment is used to characterise a general class of safety checking problems, defined independently of any particular representation (such as automata) for a class of recursion schemes built over arbitrary constants. Decidability of any problem in the class is an immediate corollary. Moreover, the work looks beyond whole-program verification, the traditional territory of model checking, by giving a natural treatment of higher-type properties, which are sets of functions.
Subjects/Keywords: 005.3; Theory and automated verification; software verification; intersection types; model checking

Princeton University
10.
Savary Bélanger, Olivier.
Verified Extraction for Coq
.
Degree: PhD, 2019, Princeton University
URL: http://arks.princeton.edu/ark:/88435/dsp01zw12z817f
Interactive theorem provers allow for the development, in the same environment, of programs and of proofs about them. The programmatic portion of the development can then be extracted to code, which is then compiled into an executable. However, unless both the extraction and compilation processes are formally verified, one has no guarantees that the proofs developed still apply to the resulting executable. This thesis describes my work on CertiCoq, a verified extraction pipeline for the Coq theorem prover that composes with the CompCert verified C compiler to achieve end-to-end correctness guarantees.
I present a proof framework for proving optimizations over the continuation-passing style (CPS) intermediate representation (IR) used in CertiCoq. This framework has been used by me and others to prove the correctness of nontrivial optimizations. I focus on a novel proof of correctness for a shrink-reduction algorithm, a transformation combining multiple optimizations in a single pass that always results in smaller terms.
I also present a verified code generator translating the CPS IR into Clight, a front-end language of CompCert. I show how it interfaces with a verified garbage collector and how its proof composes with the proof of correctness of CompCert.
Taken together, this thesis shows how carefully crafted intermediate languages facilitate verification effort in the context of an optimizing compiler.
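The CPS intermediate representation mentioned above can be illustrated with a minimal sketch: a toy evaluator for arithmetic expressions rewritten in continuation-passing style, where every intermediate result is handed to an explicit continuation instead of being returned. The expression encoding is an assumption for illustration, not CertiCoq's IR.

```python
# Expressions: an int literal, or ('add', e1, e2).
def cps_eval(expr, k):
    """Evaluate expr in continuation-passing style: instead of
    returning a value, pass it to the continuation k. Every call
    is a tail call, which is what makes CPS a convenient IR."""
    if isinstance(expr, int):
        return k(expr)
    op, e1, e2 = expr
    assert op == 'add'
    # Evaluate e1, then e2, then hand the sum to k.
    return cps_eval(e1, lambda v1:
           cps_eval(e2, lambda v2: k(v1 + v2)))

print(cps_eval(('add', 1, ('add', 2, 3)), lambda v: v))  # 6
```

Because control flow is reified as continuations, optimizations such as the shrink reductions mentioned above become local rewrites on the term.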
Advisors/Committee Members: Appel, Andrew W (advisor).
Subjects/Keywords: Compilers; Extraction; Formal software verification; Functional Languages; Program verification
11.
Moore, Brandon Michael.
Coinductive program verification.
Degree: PhD, Computer Science, 2016, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/95372
We present a program-verification approach based on coinduction, which makes it feasible to verify programs given an operational semantics of a programming language, without constructing intermediates like axiomatic semantics or verification-condition generators. Specifications can be written using any state predicates.
The key observations are these: because the correctness of a style of program specification can be defined as a greatest fixpoint, coinduction can be used to conclude that a specification holds; and the number of cases that must be enumerated for a coinductively provable specification can be reduced to a feasible number by using a generalized coinduction principle (based on the notions of "coinduction up to" developed for proving bisimulation) instead of the simplest statement of coinduction.
We implement our approach in Coq, producing a certifying, language-independent verification framework. The soundness of the system rests on a single module proving the necessary coinduction theorem, which is imported unchanged to prove programs in any language.
We demonstrate the power of this approach by verifying algorithms as complicated as Schorr-Waite graph marking, and its flexibility by instantiating it for language definitions covering several paradigms, in several styles of semantics.
We also demonstrate a comfortable level of proof automation for several languages and domains, using a common overall heuristic strategy instantiated with customized subroutines. Manual assistance is also smoothly integrated where automation is not completely successful.
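The greatest-fixpoint reading of specifications described above admits a minimal sketch: checking that a predicate holds on every state reachable under an operational semantics, where states already under consideration serve as coinduction hypotheses, so revisiting one closes the proof cycle. This is the textbook coinductive view of safety checking, not the dissertation's Coq framework; the transition system below is invented.

```python
def always(step, state, pred, claimed=None):
    """Coinductively check the safety property 'pred holds on every
    state reachable from state'. The set of states already being
    checked ('claimed') is the coinduction hypothesis: reaching a
    claimed state closes the cycle, which is sound because safety
    is a greatest fixpoint. A state enters 'claimed' only after
    pred has been verified on it."""
    if claimed is None:
        claimed = set()
    if state in claimed:            # revisiting a claim: cycle closed
        return True
    if not pred(state):
        return False
    claimed.add(state)
    return all(always(step, s, pred, claimed) for s in step(state))

# A toy operational semantics: a counter incrementing mod 4.
step = lambda s: [(s + 1) % 4]
print(always(step, 0, lambda s: s < 4))   # True: invariant holds
print(always(step, 0, lambda s: s < 3))   # False: state 3 is reachable
```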
Advisors/Committee Members: Rosu, Grigore (advisor), Rosu, Grigore (Committee Chair), Gunter, Elsa L (committee member), Meseguer, José (committee member), Chlipala, Adam (committee member).
Subjects/Keywords: Program Verification; Coinduction; Operational Semantics; Formal Methods; Software Verification
12.
Oliveira, João Paulo dos Santos.
Rabbit: A novel approach to find data-races during state-space exploration
.
Degree: 2012, Universidade Federal de Pernambuco
URL: http://repositorio.ufpe.br/handle/123456789/10891
Data races are an important kind of error in concurrent shared-memory programs. Software model checking is a popular approach to finding them. This research proposes Rabbit, a novel approach that complements model checking by efficiently reporting precise race warnings during state-space exploration (SSE). It uses information obtained across the different paths explored during SSE to predict likely racy memory accesses. We evaluated Rabbit on 33 different race scenarios, involving a total of 21 distinct application subjects of various sources and sizes. Results indicate that Rabbit reports race warnings very soon compared to the time at which the model checker detects the race (for 84.8% of the cases it reports a true race warning in <5s), and that the warnings it reports include very few false alarms. We also observed that the model checker finds the actual race quickly when it uses a guided search that builds on Rabbit's output (for 74.2% of the cases it reports the race in <20s).
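Rabbit's actual cross-path prediction is only summarised above, so the following is merely a generic lockset-style sketch (in the spirit of the classic Eraser heuristic) of how candidate racy accesses can be flagged from accesses observed during exploration. The event format and names are invented.

```python
from collections import defaultdict

def race_warnings(events):
    """Scan a trace of (thread, kind, location, locks_held) access
    events and warn when two threads touch the same location with no
    common lock held and at least one write — the lockset heuristic."""
    seen = defaultdict(list)           # location -> earlier accesses
    warnings = set()
    for thread, kind, loc, locks in events:
        for t2, k2, l2 in seen[loc]:
            if t2 != thread and not (locks & l2) and 'w' in (kind, k2):
                warnings.add(loc)
        seen[loc].append((thread, kind, frozenset(locks)))
    return warnings

trace = [
    (1, 'w', 'x', {'m'}),
    (2, 'r', 'x', {'m'}),      # same lock protects both: no warning
    (1, 'w', 'y', set()),
    (2, 'w', 'y', set()),      # unprotected write/write: warn on 'y'
]
print(race_warnings(trace))    # {'y'}
```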
Advisors/Committee Members: Lima Filho, Fernando José Castor de (advisor), d’Amorim, Marcelo Bezerra (advisor).
Subjects/Keywords: Concorrency;
Software Verification;
Model Checking;
Race conditions

University of Utah
13.
Pagariya, Rohit.
Direct equivalence testing of embedded software.
Degree: MS, School of Computing, 2011, University of Utah
URL: http://content.lib.utah.edu/cdm/singleitem/collection/etd3/id/48/rec/740
Direct equivalence testing is a framework for detecting errors in C compilers and application programs that exploits the fact that program semantics should be preserved during the compilation process. Binaries generated from the same piece of code should remain equivalent irrespective of the compiler, or compiler optimizations, used. Compiler errors as well as program errors such as out-of-bounds memory access, stack overflow, and use of uninitialized local variables cause nonequivalence in the generated binaries. Direct equivalence testing has detected previously unknown errors in real-world embedded software like TinyOS and in different compilers like msp430-gcc and llvm-msp430.
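The core idea — independently produced versions of the same program must agree on every input — can be sketched as a small differential-testing harness. Both "versions" below, and the seeded bug, are assumptions for illustration; the thesis compares actual compiled binaries, not Python functions.

```python
import random

def ref_abs(x):
    """Reference version of |x| (stands in for an -O0 binary)."""
    return x if x >= 0 else -x

def opt_abs(x):
    """Equivalent 'optimised' version (stands in for an -O2 binary)."""
    return -x if x < 0 else x

def bug_abs(x):
    """Seeded miscompilation for illustration: wrong for negatives."""
    return x % (2 ** 15)

def direct_equivalence_test(f, g, trials=1000, seed=0):
    """Run both versions on identical random 16-bit inputs and
    return the first input on which their outputs diverge, or None."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-(2 ** 15), 2 ** 15 - 1)
        if f(x) != g(x):
            return x
    return None

print(direct_equivalence_test(ref_abs, opt_abs))   # None: equivalent
cex = direct_equivalence_test(ref_abs, bug_abs)
print(cex is not None and ref_abs(cex) != bug_abs(cex))  # True: bug exposed
```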
Subjects/Keywords: Compiler testing; Embedded software; Equivalence testing; Verification

Bond University
14.
Larkin, James.
A flexible framework for leveraging verification tools to enhance the verification technologies available for policy enforcement.
Degree: 2009, Bond University
URL: https://epublications.bond.edu.au/theses/12
Program verification is vital as more and more users are creating, downloading and executing foreign computer programs. Software verification tools provide a means of determining whether a program adheres to a user's security requirements, or security policy. Many verification tools exist for checking different types of policies on different types of programs. Currently, however, there is no verification tool capable of determining whether all types of programs satisfy all types of policies.
This thesis describes a framework for supporting multiple verification tools to determine program satisfaction. A user's security requirements are represented at multiple levels of abstraction as Intermediate Execution Environments. Using a sequence of configurations, a user's security requirements are transformed from the abstract level to the tool level, possibly for multiple verification tools. The validity of the framework is shown through a number of case studies.
Subjects/Keywords: Computer software – Verification; Computer Science (0984)

University of Newcastle
15.
Hogavanaghatta Kumaraswamy, Jnanamurthy.
Model-driven engineering to enhance the reliability of software development by verifying system properties and detecting clones.
Degree: PhD, 2019, University of Newcastle
URL: http://hdl.handle.net/1959.13/1397849
Research Doctorate - Doctor of Philosophy (PhD)
Computer software is used in almost every facet of day-to-day life: air conditioners, refrigerators, washing machines, transport systems, medical systems, and banking. This dependency brings advantages, such as reductions in manual work and improvements in the quality of work. It is essential to ensure that the software used in the above-mentioned applications is correct, to avoid catastrophic failures. The aim of this research is twofold, as outlined below. Firstly, formal methods are used to ensure that the software specification for a system is accurate before that system is implemented. The use of formal methods in model-driven development requires a complete transformation of model-driven engineering (MDE) models into a formal model, and the specifications for verification must then also be stated after the transformation. There are several techniques for transforming MDE models into formal models, but the model transformation alone is not enough to verify the system unless the specification is stated at the model level; otherwise, the model designer has to know a formal specification language in order to specify the properties for verification. Additionally, existing modelling frameworks consume extra time during the manual input of formal specifications into formal verification tools. Furthermore, existing methods achieve lower accuracy (the percentage of required specifications actually stated during verification) and may require additional documentation of specifications, since for a large application a designer may forget to include all the required specifications in the verification tool. To solve this problem, we use domain-specific modelling techniques to design a modelling framework that allows formal properties to be specified at the model level. The model and specifications are then extracted and transformed into a formal model and formal specifications for verification.
The designed modelling framework also provides semantics to derive test cases. The framework performs well on coverage criteria during model-based development, and reduces the time and cost of verification by enabling automation. Secondly, software program classification plays a vital role in the identification of similar functionality, which can be treated as software clones, for re-usability, code optimisation and to avoid bug traversal. There are many existing methods for identifying software clones; however, these methods focus on structure-based analysis to detect clones at the code level. We consider the clone detection process at two levels: model and code (considering MDE models and IEC 61131-3 programs). Furthermore, we present a novel approach to detecting clones at the model level using semantic-based analysis. Our method is based on model checking, which involves mathematical analysis. Our process is tested with control-flow-based models and yields good results in the detection of model clones. To identify clones in IEC 61131-3…
Advisors/Committee Members: University of Newcastle. Faculty of Engineering & Built Environment, School of Electrical Engineering and Computing.
Subjects/Keywords: formal verification; model-driven engineering; software clones

Boston University
16.
Skowyra, Richard William.
Verificare: a platform for composable verification with application to SDN-Enabled systems.
Degree: PhD, Computer Science, 2014, Boston University
URL: http://hdl.handle.net/2144/15116
Software-Defined Networking (SDN) has become increasingly prevalent in both the academic and industrial communities. A new class of system built on SDNs, which we refer to as SDN-Enabled, provides programmatic interfaces between the SDN controller and the larger distributed system. Existing tools for SDN verification and analysis are insufficiently expressive to capture this composition of a network and a larger distributed system. Generic verification systems are an infeasible solution, due to their monolithic approach to modeling and rapid state-space explosion.
In this thesis we present a new compositional approach to system modeling and verification that is particularly appropriate for SDN-Enabled systems. Compositional models may have sub-components (such as switches and end-hosts) modified, added, or removed with only minimal, isolated changes. Furthermore, invariants may be defined over the composed system that restrict its behavior, allowing assumptions to be added or removed and components to be abstracted away into the service guarantee that they provide (such as guaranteed packet arrival). Finally, compositional modeling can minimize the size of the state space to be verified by taking advantage of known model structure.
We also present the Verificare platform, a tool chain for building compositional models in our modeling language and automatically compiling them to multiple off-the-shelf verification tools. The compiler outputs a minimal, calculus-oblivious formalism, which is accessed by plugins via a translation API. This enables a wide variety of requirements to be verified. As new tools become available, the translator can easily be extended with plugins to support them.
Subjects/Keywords: Computer science; Verification; Software-defined networking

Hong Kong University of Science and Technology
17.
Lee, Hongjoo.
DeepLink : deep-learning word semantics to link software artifacts.
Degree: 2015, Hong Kong University of Science and Technology
URL: http://repository.ust.hk/ir/Record/1783.1-76839
;
https://doi.org/10.14711/thesis-b1514720
;
http://repository.ust.hk/ir/bitstream/1783.1-76839/1/th_redirect.html
It is widely understood that developers' language and bug reporters' language differ, and the differences limit traceability between commit logs and bug reports. However, few studies have uncovered these differences or tried to overcome the challenges they present. This thesis investigates and deals with these issues. First, I clarify the textual differences and lexical relations between bug reports and commit logs by projecting words into a context space with deep-learning techniques. I also identify some limitations that the conventional VSM-based textual similarity measures between bug reports and commit logs suffer from, owing to these textual differences. Second, I propose a novel approach, DeepLink, which automatically analyzes textual information and precisely recovers traceability between commit logs and their corresponding bug reports. Lastly, I evaluate the performance of DeepLink on 10 large open-source projects. The experimental results show that DeepLink outperforms conventional techniques by 17% in F-score on average.
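The gap between lexical and semantic similarity described above can be made concrete with a toy sketch: cosine similarity over bags of words, with and without a word-to-concept projection standing in for learned embeddings. The texts and the concept table are invented; DeepLink's actual model is learned, not hand-written.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two bag-of-words vectors (Counters)."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

bug_report = Counter("app crashes when clicking save button".split())
commit_log = Counter("fix npe in save handler".split())

# Raw lexical similarity is low: the two texts share only 'save'.
raw = cosine(bug_report, commit_log)

# Embedding-style fix (toy): map related words onto shared concepts
# before comparing. This table is invented for illustration.
concept = {'crashes': 'fault', 'npe': 'fault', 'fix': 'fault',
           'clicking': 'ui', 'button': 'ui', 'handler': 'ui'}
project = lambda bag: Counter(concept.get(w, w) for w in bag.elements())
linked = cosine(project(bug_report), project(commit_log))

print(raw < linked)   # True: shared semantics raise the similarity
```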
Subjects/Keywords: Computer software; Verification; Mathematical models; Quality control

Hong Kong University of Science and Technology
18.
Poon, Chi Yin.
A study of eliminating irrelevant access patterns for fault localization in concurrent programs.
Degree: 2012, Hong Kong University of Science and Technology
URL: http://repository.ust.hk/ir/Record/1783.1-7765 ; https://doi.org/10.14711/thesis-b1198616 ; http://repository.ust.hk/ir/bitstream/1783.1-7765/1/th_redirect.html
Fault localization is usually the most time-consuming step when debugging software bugs in general and concurrency bugs in particular. Automatic fault localization identifies faulty code units as those whose coverage has a high statistical correlation with the occurrence of program failure. This idea has recently been extended to identify suspicious access patterns in concurrent programs, with initial success reported on small-scale programs. However, our empirical study found that the technique is less effective on large-scale programs because it reports too many irrelevant access patterns, which drown out the genuinely problematic ones that indicate the concurrency bugs. This thesis proposes a technique to address this issue. Our key observation is that concurrency bugs usually involve only some parts of the program. Using a novel idea to isolate these parts and abstract away the others, we restrict the scope of analysis so as to eliminate the irrelevant access patterns that occur outside it. We report our technique, which has been implemented as a debugging tool for Java, and experiments that compare our work with the state-of-the-art technique in terms of fault localization accuracy for several bugs in a real-world program, Apache Derby. Overall, our technique can save more than 85% of the code-examination effort on the subject program.
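The statistical ranking the abstract describes, scoring code units by how strongly their coverage correlates with failing runs, is commonly computed with formulas such as Ochiai. The sketch below is a generic illustration of that family of metrics, not necessarily the exact formula or data from the thesis; the unit names and coverage counts are invented.

```python
import math

def ochiai(failed_cov, passed_cov, total_failed):
    """Ochiai suspiciousness: ef / sqrt(total_failed * (ef + ep)),
    where ef/ep = number of failing/passing runs covering the unit."""
    if failed_cov == 0 or total_failed == 0:
        return 0.0
    return failed_cov / math.sqrt(total_failed * (failed_cov + passed_cov))

# Hypothetical coverage matrix for 3 failing and 5 passing runs:
# unit -> (covered in failing runs, covered in passing runs).
coverage = {
    "lock_a_then_b": (3, 1),   # covered in every failing run
    "init_pool":     (3, 5),   # covered everywhere -> less discriminative
    "log_stats":     (0, 4),   # never covered in failing runs
}
TOTAL_FAILED = 3

ranking = sorted(coverage,
                 key=lambda u: ochiai(*coverage[u], TOTAL_FAILED),
                 reverse=True)
print(ranking[0])  # → lock_a_then_b
```

Eliminating irrelevant access patterns, as the thesis proposes, amounts to shrinking the `coverage` table before ranking, so the truly suspicious entries are not drowned out.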
Subjects/Keywords: Debugging in computer science; Computer software – Verification
19.
Healy, Andrew.
Predicting SMT solver performance for software verification.
Degree: 2016, RIAN
URL: http://eprints.maynoothuniversity.ie/8770/
The approach Why3 takes to interfacing with a wide variety of interactive and automatic theorem provers works well: it is designed to overcome limitations on what can be proved by a system which relies on a single tightly-integrated solver. In common with other systems, however, the degree to which proof obligations (or “goals”) are proved depends as much on the SMT solver as on the properties of the goal itself. In this work, we present a method that uses syntactic analysis to characterise goals and predict the most appropriate solver via machine-learning techniques.
Combining solvers in this way - a portfolio-solving approach - maximises the number of goals which can be proved. The driver-based architecture of Why3 presents a unique opportunity to use a portfolio of SMT solvers for software verification. The intelligent scheduling of solvers minimises the time it takes to prove these goals by avoiding solvers which return Timeout and Unknown responses. We assess the suitability of a number of machine-learning algorithms for this scheduling task.
The performance of our tool Where4 is evaluated on a dataset of proof obligations. We compare Where4 to a range of SMT solvers and theoretical scheduling strategies. We find that Where4 can outperform individual solvers by proving a greater number of goals in a shorter average time. Furthermore, Where4 can integrate into a Why3 user’s normal workflow, simplifying and automating the non-expert use of SMT solvers for software verification.
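The scheduling idea in the abstract, run the solver predicted to answer fastest first and skip those predicted to time out, can be sketched in a few lines. The solver names, predicted times, and timeout budget below are invented placeholders; Where4's actual learned model is, of course, more involved than a lookup table.

```python
TIMEOUT = 60.0  # seconds; assumed per-solver budget

# Hypothetical per-goal predictions from a learned model:
# predicted seconds until a useful answer, or None for a
# predicted Timeout/Unknown response.
predictions = {
    "Z3":       4.2,
    "CVC4":     1.1,
    "Alt-Ergo": None,   # predicted not to answer -> skipped
    "Yices":    12.0,
}

def schedule(preds):
    """Order solvers by predicted solving time, dropping those
    predicted not to answer within the budget."""
    usable = [(t, s) for s, t in preds.items()
              if t is not None and t <= TIMEOUT]
    return [s for _, s in sorted(usable)]

print(schedule(predictions))  # → ['CVC4', 'Z3', 'Yices']
```

Running solvers in this order minimises expected time to a proof: the cheap likely-successful attempts come first, and predicted Timeout/Unknown responders never consume the budget.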
Subjects/Keywords: Predicting; SMT solver performance; software verification
20.
Healy, Andrew.
Predicting SMT solver performance for software verification.
Degree: 2016, RIAN
URL: http://mural.maynoothuniversity.ie/8770/
Subjects/Keywords: Predicting; SMT solver performance; software verification

University of Limerick
21.
Fogarty, Padraig Justin.
Utilizing multicore architectures to enhance software verification in real-time embedded systems.
Degree: 2013, University of Limerick
URL: http://hdl.handle.net/10344/3602
peer-reviewed
The hypothesis of this research is that new techniques are required to facilitate software verification on the highly integrated but resource-constrained real-time embedded systems that are widely used in safety-critical applications. Software verification is an essential but expensive undertaking which often consumes as many resources as design activities, or more; this is particularly the case in embedded systems that require functional safety. This research explores the existing techniques for software verification on these systems and the verification challenges posed by modern highly integrated devices. The author then proposes a novel target-level verification approach which addresses some of these challenges.
Advances in semiconductor manufacturing processes have fuelled the relentless shrinking of IC design geometries. This has dramatically reduced the area required for each functional block, reduced costs, and allowed more complex circuits to be realised, leading to the System-on-Chip (SoC) designs which now include multiple processors within a single die. Undoubtedly many benefits result from this increased integration, but one significant drawback is the loss of access to the many signals indicating the internal operational state. Visibility of these signals is essential for many embedded software verification purposes.
In parallel with increasing SoC complexity, verification technology has moved from full in-circuit emulation, to bond-out devices, to on-chip instrumentation (OCI), each providing less visibility into the execution state of the processor. A key benefit of OCI approaches is the reduced physical interface requirement; unfortunately this also limits the real-time data that can be captured and transferred to external analysis tools. The author proposes the alternative of using this OCI in conjunction with a co-processor to perform monitoring and verification tasks on-chip, thus overcoming the interface limitations and enhancing visibility.
The experimental platform used to explore the feasibility of using a co-processor and OCI for software verification activities is described, and several case studies are examined. The results demonstrate that this approach does offer a means of addressing several software verification challenges and provides some unique capabilities, but also has some limitations. These benefits and limitations are discussed, and suggestions for future work on this research topic are provided.
Advisors/Committee Members: Heffernan, Donal, IRCSET, International Centre for Graduate Education in Micro and Nano Engineering (ICGEE).
Subjects/Keywords: software verification; real-time embedded systems

University of Oklahoma
22.
Ralston, Ryan.
Translating Clojure to ACL2 for Verification.
Degree: PhD, 2016, University of Oklahoma
URL: http://hdl.handle.net/11244/42982
Software spends a significant portion of its life-cycle in the maintenance phase, and over 20% of the maintenance effort is spent fixing defects. Formal methods, including verification, can reduce the number of defects in software and lower corrective maintenance, but their industrial adoption has been underwhelming. A significant barrier to adoption is the overhead of converting imperative programming languages, which are common in industry, into the declarative programming languages used by formal methods tools. In comparison, verifying software written in declarative programming languages is much easier because the conversion into a formal methods tool is easier. The growing popularity of declarative programming - evident from the rise of multi-paradigm languages such as JavaScript, Ruby, and Scala - affords us the opportunity to verify the correctness of software more easily.
Clojure is a declarative language released in 2007 that compiles to bytecode executing on the Java Virtual Machine (JVM). Despite being a newer declarative programming language, several companies have already used it to develop commercial products. Clojure shares a Lisp syntax with ACL2, an interactive theorem prover used to verify the correctness of software. Since both languages are based on Lisp, code written in either Clojure or ACL2 is easily converted to the other. Therefore, Clojure can conceivably be verified by ACL2 with limited overhead, assuming that the underlying behavior of Clojure code matches that of ACL2. ACL2 has previously been used to reason about Java programs through formal models of the JVM. Since Clojure compiles to JVM bytecode, a similar approach is taken in this dissertation to verify the underlying implementation of Clojure.
The research presented in this dissertation advances techniques to verify Clojure code in ACL2. Clojure and ACL2 are declarative, and specifically functional, programming languages, so the research focuses on two important concepts in functional programming and verification: arbitrary-precision numbers ("bignums") and lists. For bignums, the correctness of a model of addition is verified that addresses issues arising from the unique representation of bignums in Clojure. Lists, in Clojure, are implemented as a type of sequence. This dissertation demonstrates an abstraction that equates Clojure sequences to ACL2 lists. In support of the research, an existing ACL2 model of the JVM is modified to address specific aspects of compiled Clojure code, and the new model is used to verify the correctness of core Clojure functions with respect to corresponding ACL2 functions. The results support the ideas that ACL2 can be used to reason about Clojure code and that formal methods can be integrated more easily into industrial software development when the implementation corresponds semantically to the verification model.
Advisors/Committee Members: Page, Rex (advisor), Hougen, Dean (advisor), Miller, David (committee member), Jensen, Matthew (committee member), Weaver, Christopher (committee member).
Subjects/Keywords: Software Verification; Formal Methods; Theorem Proving

The Ohio State University
23.
Zaccai, Diego Sebastian.
A Balanced Verification Effort for the Java Language.
Degree: PhD, Computer Science and Engineering, 2016, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1461243619
Current tools used to verify software in languages with reference semantics, such as Java, are hampered by the possibility of aliases. Existing approaches to addressing this long-standing verification problem try not to sacrifice modularity, because modular reasoning is what makes verification tractable. To achieve this, these approaches treat the value of a reference variable as a memory address in the heap. A serious problem with this decision is that it severely limits the usefulness of generic collections, because they must be specified as collections of references, and components of this kind are fundamentally flawed in design and implementation. The limitations become clear when attempting to verify clients of generic collections.
The first step in rectifying the situation is to redefine the "value" of a reference variable in terms of the abstract value of the object it references. A careful analysis of what the "value" of a reference variable might mean leads inevitably to this conclusion, which is consistent with the denotation of a variable in languages with value semantics, such as RESOLVE. Verification in languages with value semantics is straightforward compared to verification in languages with reference semantics, precisely because of the lack of concern with heap properties. However, there is still a nagging problem: aliasing can occur in legal programs in languages with reference semantics, such as Java, and it must be handled when it does occur. The crux of the issue is not in-your-face assignment statements that copy references but rather aliasing arising within (hidden) method bodies. The reason is that the effects of calls to these methods in client code must be summarized in their specifications in order to preserve modularity.
So, the second step is the introduction of a discipline restricting what a client can do with a reference that is aliased within a method. The discipline advertises the creation of such aliases in method specifications and prevents a client from engaging in behavior that would break the abstractions of the objects being referenced, as this would also prevent modular verification. These restrictions allow code to be specified in terms of the abstract values of objects instead of treating the values of references as memory addresses in the heap. Even though the discipline prevents some programming idioms, it remains flexible enough to allow for most common programs, including the use of iterators, without the need for workarounds.
A tool can verify that a program satisfies the provisions of this discipline. Further, it can generate verification conditions that rely only on abstract object values to demonstrate a program's correctness. These verification conditions can be discharged by the theorem-proving tools currently used to verify RESOLVE programs.
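The aliasing problem the abstract describes is easy to reproduce in any reference-semantics language; Python makes the same point as Java here. The classes and data below are our own toy illustration, not code from the thesis.

```python
# An alias created silently inside a constructor breaks the
# abstraction: the object's "value" now depends on the heap.
class Cache:
    def __init__(self, items):
        self._items = items        # aliases the caller's list!

    def size(self):
        return len(self._items)

data = [1, 2, 3]
cache = Cache(data)
assert cache.size() == 3

# The client mutates `data`, unaware it still aliases the cache's
# internal state -- the cache changed without being touched.
data.append(4)
assert cache.size() == 4

# A value-semantics discipline (copy on capture) restores modular
# reasoning: the object's abstract value no longer depends on
# aliases the client may hold.
class SafeCache(Cache):
    def __init__(self, items):
        super().__init__(list(items))   # defensive copy

data2 = [1, 2, 3]
safe = SafeCache(data2)
data2.append(4)
assert safe.size() == 3    # unaffected by the client's mutation
```

The discipline proposed in the dissertation goes further than copying: it advertises such aliases in method specifications so a verifier can reject client code that would exploit them.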
Advisors/Committee Members: Weide, Bruce W. (Advisor).
Subjects/Keywords: Computer Science; Formal Methods; Software Verification
24.
Famelis, Michail.
Managing Design-Time Uncertainty in Software Models.
Degree: PhD, 2016, University of Toronto
URL: http://hdl.handle.net/1807/72997
The concern for handling uncertainty in software pervades contemporary software engineering. A particular form of uncertainty is that which can exist at multiple stages of the development process, where developers are uncertain about the content of their artifacts. Existing tools and methodologies do not support working in the presence of design-time uncertainty, i.e., uncertainty that developers have about the content of their artifacts at various stages of the development process, so developers must mentally keep track of a multitude of possible alternative designs. Because of this uncertainty, developers are forced either to refrain from using their tools until the uncertainty is resolved, or to make provisional decisions and attempt to keep track of them in case they prove premature and need to be undone. These options lead to either under-utilization of resources or potentially costly re-engineering.
This thesis presents a way to avoid these pitfalls by managing uncertainty systematically. We propose to work in the presence of uncertainty and to resolve it only when enough information is available. Development can therefore continue while avoiding premature design commitments.
In a pilot user study we found that, when asked to articulate design-time uncertainty in a free-form modelling scenario, people tend to explicate it within the software artifact itself, staying close to the existing notation. This led us to adopt "partial models", a formalism for representing sets of possible design alternatives while staying faithful to the underlying language. This way, the problem of managing uncertainty in software artifacts is turned into a problem of managing software artifacts that contain uncertainty.
To manage partial models, we have leveraged several software engineering sub-disciplines to develop techniques for: (a) articulating uncertainty; (b) checking and enforcing properties, as well as generating appropriate user feedback; (c) applying transformations; and (d) systematically making decisions as new information becomes available. The resulting partial-model management framework utilizes novel abstraction and automation approaches and enables a principled and systematic approach to managing design-time uncertainty in the software development process.
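The property-checking idea behind partial models can be sketched by treating a partial model explicitly as the set of design alternatives it stands for and returning a three-valued verdict. This is a deliberately naive enumeration for illustration; the thesis's formalism encodes the set symbolically, and the example models and property are invented.

```python
# A "partial model" shown explicitly as its set of concretisations:
# three hypothetical architectural alternatives, each a set of edges.
alternatives = [
    {"edges": {("Client", "Cache"), ("Cache", "DB")}},
    {"edges": {("Client", "DB")}},
    {"edges": {("Client", "Cache"), ("Client", "DB"), ("Cache", "DB")}},
]

def check(prop, alts):
    """Three-valued verdict over all design alternatives."""
    results = [prop(m) for m in alts]
    if all(results):
        return "True"      # holds no matter how uncertainty is resolved
    if not any(results):
        return "False"     # cannot hold; the design is already wrong
    return "Maybe"         # depends on a decision not yet made

talks_to_db_directly = lambda m: ("Client", "DB") in m["edges"]
print(check(talks_to_db_directly, alternatives))  # → Maybe
```

A "Maybe" verdict is exactly the case where development can continue: the property check gives feedback without forcing the developer to resolve the uncertainty prematurely.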
Advisors/Committee Members: Chechik, Marsha, Computer Science.
Subjects/Keywords: Model Transformation; Partial Models; Software Engineering; Software Modelling; Software Processes; Software Verification; 0984

University of Oxford
25.
Kesseli, Pascal.
Semantic refactorings.
Degree: PhD, 2017, University of Oxford
URL: http://ora.ox.ac.uk/objects/uuid:0c74954e-dc83-463f-bcd4-519d98c3dcca ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.748687
▼ Refactorings are structured changes to existing software that leave its externally observable behaviour unchanged. The intent is to improve readability, performance or other non-behavioural properties of a program. Agile software engineering processes stress the importance of refactoring to keep program code extensible and maintainable. Despite their apparent benefits, manual refactorings are time-consuming and prone to introducing unintended side effects. Research efforts seek to support and automate refactoring tasks to overcome these limitations. Current research in automatic refactoring, as well as state-of-the-art automated refactoring tools, frequently rely on syntax-driven approaches. They focus on transformations which can be safely performed using only syntactic information about a program or act overly conservative when knowledge about program semantics is required. In this thesis we explore semantics-driven refactoring, which enables much more sophisticated refactoring schemata. Our semantics-driven refactorings rely on formal verification algorithms to reason over a program's behaviour, and we conjecture they are more precise and can handle more complex code scenarios than syntax-driven ones. For this purpose, we present and implement a program synthesis algorithm based on the CEGIS paradigm and demonstrate that it can be applied to a diverse set of applications. Our synthesiser relies on the bounded model checker CBMC as an oracle and is based on an earlier research prototype called Kalashnikov. We further define our Java Stream Theory (JST) which allows us to reason over a set of interesting semantic refactorings. Both solutions are combined into an automated semantic refactoring decision procedure, reasoning over program behaviours, and searching the space of possible refactorings using program synthesis. 
We provide experimental evidence to support our conjecture that semantics-driven refactorings exceed syntax-driven approaches in precision and scope.
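The thesis's actual machinery (CEGIS synthesis, CBMC, JST) is not reproduced here; as a minimal sketch of what "behaviour-preserving refactoring" means, the following toy example (ours, in Python rather than Java, with all names hypothetical) shows an imperative loop, its refactored form, and the bounded equivalence check that a formal verifier would discharge exhaustively rather than by sampling:

```python
def before(xs):
    # Original imperative loop: collect squares of the even elements.
    out = []
    for x in xs:
        if x % 2 == 0:
            out.append(x * x)
    return out

def after(xs):
    # Refactored form with identical observable behaviour.
    return [x * x for x in xs if x % 2 == 0]

def behaviourally_equal(f, g, inputs):
    # Bounded equivalence check on a finite input set. A verifier such as
    # CBMC would instead prove equivalence for all inputs up to a bound.
    return all(f(i) == g(i) for i in inputs)

samples = [[], [1, 2, 3, 4], [0, -2, 7]]
print(behaviourally_equal(before, after, samples))  # True
```

A syntax-driven tool can apply this rewrite pattern mechanically; the semantics-driven setting matters when the two forms differ structurally and equivalence must be established by reasoning over behaviour.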
Subjects/Keywords: 005.3; Computer science; software synthesis; software verification; refactoring
APA (6th Edition):
Kesseli, P. (2017). Semantic refactorings. (Doctoral Dissertation). University of Oxford. Retrieved from http://ora.ox.ac.uk/objects/uuid:0c74954e-dc83-463f-bcd4-519d98c3dcca ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.748687
Chicago Manual of Style (16th Edition):
Kesseli, Pascal. “Semantic refactorings.” 2017. Doctoral Dissertation, University of Oxford. Accessed March 07, 2021.
http://ora.ox.ac.uk/objects/uuid:0c74954e-dc83-463f-bcd4-519d98c3dcca ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.748687.
MLA Handbook (7th Edition):
Kesseli, Pascal. “Semantic refactorings.” 2017. Web. 07 Mar 2021.
Vancouver:
Kesseli P. Semantic refactorings. [Internet] [Doctoral dissertation]. University of Oxford; 2017. [cited 2021 Mar 07].
Available from: http://ora.ox.ac.uk/objects/uuid:0c74954e-dc83-463f-bcd4-519d98c3dcca ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.748687.
Council of Science Editors:
Kesseli P. Semantic refactorings. [Doctoral Dissertation]. University of Oxford; 2017. Available from: http://ora.ox.ac.uk/objects/uuid:0c74954e-dc83-463f-bcd4-519d98c3dcca ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.748687
26.
Chunduri, Annapurna.
An Effective Verification Strategy for Testing Distributed Automotive Embedded Software Functions: A Case Study.
Degree: 2016, Department of Software Engineering
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12805
▼ Context. The share and importance of software within automotive vehicles is growing steadily. Most functionalities in modern vehicles, especially safety-related functions like advanced emergency braking, are controlled by software. A complex and common phenomenon in today’s automotive vehicles is the distribution of such software functions across several Electronic Control Units (ECUs), and consequently across several ECU system software modules. As a result, integration testing of these distributed software functions has been found to be a challenge. The automotive industry has neither infinite resources nor the time to carry out exhaustive testing of these functions. On the other hand, the traditional approach of implementing an ad-hoc selection of test scenarios based on the tester’s experience can lead to test gaps and test redundancies. Hence, there is a pressing need within the automotive industry for a feasible and effective verification strategy for testing distributed software functions. Objectives. Firstly, to identify the current approach used to test distributed automotive embedded software functions in the literature and at a case company. Secondly, to propose and validate a feasible and effective verification strategy for testing the distributed software functions that helps improve test coverage while reducing test redundancies and test gaps. Methods. To accomplish the objectives, a case study was conducted at Scania CV AB, Södertälje, Sweden. One of the data collection methods was interviews with different employees involved in the software testing activities. Based on the research objectives, an interview questionnaire with open-ended and close-ended questions was used. Apart from interviews, data from relevant artifacts in databases and archived documents was used to achieve data triangulation.
Moreover, to further strengthen the validity of the results obtained, adequate literature support has been presented throughout. Towards the end, a verification strategy has been proposed and validated using existing historical data at Scania. Conclusions. The proposed verification strategy to test distributed automotive embedded software functions has given promising results by providing means to identify test gaps and test redundancies. It helps establish an effective and feasible approach to capture function test coverage information that helps enhance the effectiveness of integration testing of the distributed software functions.
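The core bookkeeping such a strategy needs can be sketched briefly (a simplification of ours, not Scania's tooling; all test and signal names are hypothetical): given a mapping from test cases to the parts of a distributed function they exercise, report uncovered parts (test gaps) and tests whose coverage is subsumed by another test (redundancy candidates).

```python
def analyse(coverage, all_parts):
    """coverage: {test_name: set of exercised function parts}."""
    covered = set().union(*coverage.values()) if coverage else set()
    gaps = all_parts - covered
    # A test is a redundancy candidate if some other test covers a
    # superset of its parts.
    redundant = {
        t for t, parts in coverage.items()
        if any(o != t and parts <= op for o, op in coverage.items())
    }
    return gaps, redundant

cov = {
    "T1": {"brake_request", "wheel_speed"},
    "T2": {"wheel_speed"},
    "T3": {"brake_request", "wheel_speed", "warning_lamp"},
}
parts = {"brake_request", "wheel_speed", "warning_lamp", "driver_display"}
gaps, redundant = analyse(cov, parts)
print(gaps)       # {'driver_display'}  -- a test gap
print(redundant)  # {'T1', 'T2'}        -- both subsumed by T3
```

In practice the coverage mapping would come from historical test artifacts, as in the case study, rather than being written by hand.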
Subjects/Keywords: Verification Strategy; Automotive Embedded Software; Test Coverage; Software Engineering; Programvaruteknik
APA (6th Edition):
Chunduri, A. (2016). An Effective Verification Strategy for Testing Distributed Automotive Embedded Software Functions: A Case Study. (Thesis). Department of Software Engineering. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12805
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Chunduri, Annapurna. “An Effective Verification Strategy for Testing Distributed Automotive Embedded Software Functions: A Case Study.” 2016. Thesis, Department of Software Engineering. Accessed March 07, 2021.
http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12805.
MLA Handbook (7th Edition):
Chunduri, Annapurna. “An Effective Verification Strategy for Testing Distributed Automotive Embedded Software Functions: A Case Study.” 2016. Web. 07 Mar 2021.
Vancouver:
Chunduri A. An Effective Verification Strategy for Testing Distributed Automotive Embedded Software Functions: A Case Study. [Internet] [Thesis]. Department of Software Engineering; 2016. [cited 2021 Mar 07].
Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12805.
Council of Science Editors:
Chunduri A. An Effective Verification Strategy for Testing Distributed Automotive Embedded Software Functions: A Case Study. [Thesis]. Department of Software Engineering; 2016. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12805

Virginia Tech
27.
Ibrar, Fahad.
Strip-Miner: Automatic Bug Detection in Large Software Code with Low False Positive Rate.
Degree: MS, Computer Science and Applications, 2020, Virginia Tech
URL: http://hdl.handle.net/10919/97934
▼ Every piece of software code has bugs that can change its expected behavior. There have been many efforts to automate the process of bug detection, but most of the proposed techniques suffer from a high rate of false alarms. Some of these techniques leverage the information available in software code to extract programming patterns that can be used to find potential bugs. Although such an approach has proved fruitful for finding bugs, the large number of false alarms makes it almost useless in software development.
The elements present in software code have relationships among them, formally known as dependencies, and the process of finding them is known as dependency analysis. Market basket analysis is a technique used by large retailers to find associations between items: it looks for combinations of items that frequently occur together in transactions. Similarly, in software code, combinations of elements that occur together can be used to find associations between them. This technique is formally known as frequent itemset mining in the data mining domain. This work proposes an approach, named Strip-Miner, that combines dependency analysis with frequent itemset mining to reduce the rate of false alarms. We adopt a two-phase approach: 1) finding the potential bugs in code and 2) filtering the false alarms. In the first phase we extract code elements and the dependencies among them, and use frequent itemset mining to find programming patterns; a deviation from these patterns is considered a potential bug. In the second phase, we use the extracted dependencies to build dependency chains between program elements present in a programming pattern, and the lack of such a chain is an indication of a false alarm.
Our evaluation on a set of 7 benchmarks consisting of large software codebases, including version control systems, database management systems, software security libraries and utility software like media players, shows that combining simple dependency analysis with frequent itemset mining can significantly decrease the rate of false alarms. Using our approach we are able to reduce the number of generated bug reports by up to 99.9%, with a false alarm rate of 65.19% and a real bug rate of 34.18% on average, compared to an earlier frequent-itemset-mining-based approach, PR-Miner.
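The first phase described above can be sketched in miniature (our own simplification, not Strip-Miner's or PR-Miner's code; all function and element names are hypothetical): treat each function as a "transaction" of the program elements it uses, mine frequently co-occurring pairs, and flag functions that use one element of a frequent pair without the other as potential bugs.

```python
from itertools import combinations
from collections import Counter

def mine_pairs(transactions, min_support):
    # Count how often each pair of elements co-occurs across functions;
    # keep pairs meeting the support threshold as programming patterns.
    counts = Counter()
    for items in transactions.values():
        for pair in combinations(sorted(items), 2):
            counts[pair] += 1
    return {p for p, c in counts.items() if c >= min_support}

def violations(transactions, patterns):
    # A function using one element of a frequent pair but not the other
    # deviates from the pattern: report it as a potential bug.
    found = []
    for func, items in transactions.items():
        for a, b in patterns:
            if a in items and b not in items:
                found.append((func, a, b))
            elif b in items and a not in items:
                found.append((func, b, a))
    return found

funcs = {
    "f1": {"lock", "unlock", "read"},
    "f2": {"lock", "unlock", "write"},
    "f3": {"lock", "unlock"},
    "f4": {"lock", "read"},   # uses lock without unlock: suspicious
}
patterns = mine_pairs(funcs, min_support=3)   # {('lock', 'unlock')}
print(violations(funcs, patterns))            # [('f4', 'lock', 'unlock')]
```

Strip-Miner's second phase would then check whether a dependency chain links the pattern's elements before raising an alarm; this sketch stops at the pattern-mining step.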
Advisors/Committee Members: Hicks, Matthew (committeechair), Servant Cortes, Francisco Javier (committee member), Butt, Ali (committee member).
Subjects/Keywords: Software Engineering; Software/Program Verification; Automatic Bug Detection
APA (6th Edition):
Ibrar, F. (2020). Strip-Miner: Automatic Bug Detection in Large Software Code with Low False Positive Rate. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/97934
Chicago Manual of Style (16th Edition):
Ibrar, Fahad. “Strip-Miner: Automatic Bug Detection in Large Software Code with Low False Positive Rate.” 2020. Masters Thesis, Virginia Tech. Accessed March 07, 2021.
http://hdl.handle.net/10919/97934.
MLA Handbook (7th Edition):
Ibrar, Fahad. “Strip-Miner: Automatic Bug Detection in Large Software Code with Low False Positive Rate.” 2020. Web. 07 Mar 2021.
Vancouver:
Ibrar F. Strip-Miner: Automatic Bug Detection in Large Software Code with Low False Positive Rate. [Internet] [Masters thesis]. Virginia Tech; 2020. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/10919/97934.
Council of Science Editors:
Ibrar F. Strip-Miner: Automatic Bug Detection in Large Software Code with Low False Positive Rate. [Masters Thesis]. Virginia Tech; 2020. Available from: http://hdl.handle.net/10919/97934

UCLA
28.
Gao, Min.
Applications of Formal And Semi-formal Verification on Software Testing, High-level Synthesis And Energy Internet.
Degree: Electrical Engineering, 2018, UCLA
URL: http://www.escholarship.org/uc/item/6jj2b6jn
▼ With the increasing power of computers and advances in constraint-solving technologies, formal and semi-formal verification have received great attention in many applications. Formal verification is the act of proving or disproving the correctness of the intended algorithms underlying a system with respect to a certain formal specification or property. These verification techniques have a wide range of applications in real life. This dissertation describes applications of formal and semi-formal verification in four parts. The first part of the dissertation focuses on software testing. For software testing, symbolic/concolic testing reasons about data symbolically but enumerates program paths. The existing concolic technique enumerates paths sequentially, leading to poor branch coverage in limited time. We improve concolic testing with bounded model checking (BMC). During concolic testing, we identify program regions that can be encoded by BMC on the fly, so that program paths within these regions are checked simultaneously. We have implemented the new algorithm on top of KLEE and call the new tool Llsplat. We have compared Llsplat with KLEE using 10 programs from the Windows NT Drivers Simplified set and 88 programs from the GNU Coreutils benchmark set. With 3600 seconds of testing time for each program, Llsplat provides on average 13% relative branch coverage improvement on all 10 programs in the Windows drivers set, and on average 16% relative branch coverage improvement on 80 out of 88 programs in the GNU Coreutils set.
The second part of the dissertation implements symbolic/concolic testing methods on an embedded platform. With the more extensive use of and higher demand for embedded systems, the reliability of embedded software becomes a critical issue. Thus it is important to design a test harness that can test embedded software on the real platform, or in a hardware-in-the-loop framework, comprehensively and systematically. We present our design prototype Codecomb. Codecomb implements symbolic/concolic execution that is able to generate test cases achieving high branch coverage. It exploits a client/server architecture to isolate the testing tools from the program under test, so that the complex computation is performed on the server side. Experimental results show that Codecomb can detect program deficiencies automatically on the embedded platform, and precisely locate errors such as buffer overflows and memory leaks in a running program.
The third part of the dissertation applies formal and semi-formal methods to high-level synthesis (HLS) for VLSI. Verifying the functional equivalence of high-level synthesis with formal methods ensures the correctness of the transformation flow. Current verification work widely uses static analysis such as model checking, while a purely dynamic execution flow is missing. In this part, we propose a functional verification flow for HLS utilizing symbolic execution on both C and Verilog directly. Specifically, at the behavioral C level we collect program traces via symbolic execution. As for the Verilog level,…
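Why sequential path enumeration limits branch coverage can be illustrated with simple arithmetic (our illustration, not the dissertation's): a program region with k independent sequential branches has 2^k execution paths, so a path-at-a-time explorer needs exponentially many runs, while branch coverage itself only requires hitting 2k branch outcomes, and a BMC encoding of the region captures all paths in one formula.

```python
def paths(k):
    # Distinct execution paths through k sequential, independent branches.
    return 2 ** k

def branch_targets(k):
    # Branch coverage needs only both outcomes of each branch.
    return 2 * k

for k in (1, 5, 10, 20):
    print(k, paths(k), branch_targets(k))
# At k = 20 there are 1,048,576 paths but only 40 branch targets.
```

This gap is the motivation for merging paths via BMC inside a concolic run: regions are checked simultaneously instead of path by path.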
Subjects/Keywords: Electrical engineering; Energy internet; Formal verification; Functional verification; High-level Synthesis; Software testing
APA (6th Edition):
Gao, M. (2018). Applications of Formal And Semi-formal Verification on Software Testing, High-level Synthesis And Energy Internet. (Thesis). UCLA. Retrieved from http://www.escholarship.org/uc/item/6jj2b6jn
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Gao, Min. “Applications of Formal And Semi-formal Verification on Software Testing, High-level Synthesis And Energy Internet.” 2018. Thesis, UCLA. Accessed March 07, 2021.
http://www.escholarship.org/uc/item/6jj2b6jn.
MLA Handbook (7th Edition):
Gao, Min. “Applications of Formal And Semi-formal Verification on Software Testing, High-level Synthesis And Energy Internet.” 2018. Web. 07 Mar 2021.
Vancouver:
Gao M. Applications of Formal And Semi-formal Verification on Software Testing, High-level Synthesis And Energy Internet. [Internet] [Thesis]. UCLA; 2018. [cited 2021 Mar 07].
Available from: http://www.escholarship.org/uc/item/6jj2b6jn.
Council of Science Editors:
Gao M. Applications of Formal And Semi-formal Verification on Software Testing, High-level Synthesis And Energy Internet. [Thesis]. UCLA; 2018. Available from: http://www.escholarship.org/uc/item/6jj2b6jn

University of Illinois – Chicago
29.
Bernasconi, Anna.
Building Deductive Proofs of LTL Properties for Iteratively Refined Systems.
Degree: 2016, University of Illinois – Chicago
URL: http://hdl.handle.net/10027/20884
▼ Modern software development processes are evolving from sequential to increasingly agile and incremental paradigms. Verification, an unavoidable step in the production of correct software, cannot be left behind by this quickly changing practice. Advances in verification techniques have been considerable in the past years, and feasibility has been achieved on ever larger systems. Nevertheless, we believe that verification and modern development processes are still not moving at the same pace in terms of incrementality. Classical verification algorithms are applied when a complete specification of the model is available, after considerable development cost and effort has already been spent. Today more than ever, the description of a system changes continuously during the analysis phase, calling for periodic adjustments to its specifications. Various parts are often only sketched, awaiting further enrichment, which is sometimes delegated to third parties. The classical scenario is therefore no longer applicable: it becomes essential to come up with lightweight iterative verification methods that can be applied to incomplete models at each stage of the design and development phases, contributing more incisively to developers' choices.
With particular focus on two main verification techniques, model checking and deductive verification, we study a way of integrating them into this incremental context. The idea is to supply each step of the design phase with a way to prove behaviors of incomplete systems or of single components. Step-wise model checking can be augmented by a simple incremental deductive-system generator that justifies why the system meets some requested temporal specification. This kind of infrastructure can make a useful contribution in cases and refinements where safety, starvation or liveness are critical, and, in general, guide the choices of a developer who faces different designs.
The main idea is to combine two approaches presented in the literature: we exploit a procedure of model checking that supports incompletely specified systems, and we study a mechanism to build deductive proofs using information gathered during model checking. This thesis deals with the construction of these incremental deductive proofs of linear temporal logic properties for incomplete systems that are completed progressively as the system is refined.
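The model-checking side of the combination can be sketched in a few lines (a deliberately simplified explicit-state version of ours, not the thesis's procedure for incomplete systems): a safety property G(safe) over a finite transition system is checked by exploring reachable states, and the reachable set obtained is exactly the kind of information (an inductive invariant) from which a deductive proof of the property can be assembled.

```python
from collections import deque

def check_invariant(init, succ, safe):
    """Return (holds, reachable). succ maps a state to its successors."""
    seen, queue = set(init), deque(init)
    while queue:
        s = queue.popleft()
        if not safe(s):
            return False, seen        # counterexample state found
        for t in succ(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, seen                 # 'seen' is an inductive invariant

# Toy system: a counter modulo 4, started at 0.
succ = lambda s: [(s + 1) % 4]
holds, reach = check_invariant([0], succ, lambda s: s != 3)
print(holds, sorted(reach))  # False: state 3 is reachable
```

In the incremental setting studied here, the interesting case is re-running such a check as sketched components are refined, reusing the gathered information to extend the deductive proof rather than rebuilding it.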
Advisors/Committee Members: Zuck, Lenore D. (advisor), Sistla, Prasad (committee member), Lanzi, Pier Luca (committee member).
Subjects/Keywords: formal software verification; model checking; deductive verification; requirements; incremental; deductive proof; deductive system
APA (6th Edition):
Bernasconi, A. (2016). Building Deductive Proofs of LTL Properties for Iteratively Refined Systems. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/20884
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Bernasconi, Anna. “Building Deductive Proofs of LTL Properties for Iteratively Refined Systems.” 2016. Thesis, University of Illinois – Chicago. Accessed March 07, 2021.
http://hdl.handle.net/10027/20884.
MLA Handbook (7th Edition):
Bernasconi, Anna. “Building Deductive Proofs of LTL Properties for Iteratively Refined Systems.” 2016. Web. 07 Mar 2021.
Vancouver:
Bernasconi A. Building Deductive Proofs of LTL Properties for Iteratively Refined Systems. [Internet] [Thesis]. University of Illinois – Chicago; 2016. [cited 2021 Mar 07].
Available from: http://hdl.handle.net/10027/20884.
Council of Science Editors:
Bernasconi A. Building Deductive Proofs of LTL Properties for Iteratively Refined Systems. [Thesis]. University of Illinois – Chicago; 2016. Available from: http://hdl.handle.net/10027/20884

University of Delaware
30.
Zirkel, Timothy K.
Formally verifying the accuracy of numerical approximations in scientific software.
Degree: PhD, Department of Computer and Information Sciences, 2014, University of Delaware
URL: http://udspace.udel.edu/handle/19716/13434
▼ Numerical computation has broad application to a variety of fields. Typically a numerical method yields an approximation to an exact mathematical value, since programs cannot generally handle evaluation of continuous functions at all points. The common way of creating such a method is to discretize continuous functions by restricting them to a mesh. Performing calculations on the mesh provides an approximation to performing calculations on the original function. However, this introduces error. While not the only source of error (round-off error in floating-point operations can be a major consideration), the error in the method itself is in some sense more fundamental. In practice, programs utilizing these approximations often contain defects which introduce additional error.
The order of accuracy of a numerical method relates the scheme's error to the discretization parameters. Scientists must know the accuracy of any numerical approximation, and often prove that the method satisfies the claimed order of accuracy by hand. However, the actual code to implement a method might be more complex and veer from the abstract mathematics.
We show that the claimed order of accuracy of a numerical method implemented in a C program can be (largely) automatically verified using formal methods. The automation cannot be complete, because the problem is undecidable in general and because the programmer must provide some annotations relating the code to the underlying mathematics. These annotations can be kept to a minimum. We have extended the Concurrency Intermediate Verification Language (CIVL) model checker to verify the order of accuracy of a numerical computation. Our method requires annotating C code with information specifying the function and the order of accuracy of the approximation. CIVL parses the annotations with the C code to form a model of the program. The model is symbolically executed, and techniques such as Taylor expansion are then used to relate the program data to the mathematical function. The verifier, with the assistance of a theorem prover, determines either that the assertions hold at all states, or else that they may not hold. If the assertions may not hold, CIVL provides diagnostic information.
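The notion of order of accuracy being verified here has a standard empirical companion (a textbook convergence test, not CIVL's formal procedure): approximate f'(x) with the central difference (f(x+h) - f(x-h)) / (2h), halve h, and take log2 of the ratio of the errors; a second-order scheme gives an error ratio near 4, hence an observed order near 2.

```python
import math

def central_diff(f, x, h):
    # Central-difference approximation to f'(x); error is O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

def observed_order(f, dfdx, x, h):
    # Estimate the order of accuracy from the error ratio at h and h/2.
    e1 = abs(central_diff(f, x, h) - dfdx(x))
    e2 = abs(central_diff(f, x, h / 2) - dfdx(x))
    return math.log2(e1 / e2)

p = observed_order(math.sin, math.cos, 1.0, 1e-2)
print(round(p, 2))  # close to 2.0, the scheme's claimed order
```

A formal verifier like CIVL proves the Taylor-expansion argument behind this symbolically, for the code as written, rather than observing the ratio numerically at a few step sizes.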
Advisors/Committee Members: Siegel, Stephen F..
Subjects/Keywords: Numerical calculations – Verification.; Computer software – Verification.; C (Computer program language); Formal methods (Computer science)
APA (6th Edition):
Zirkel, T. K. (2014). Formally verifying the accuracy of numerical approximations in scientific software. (Doctoral Dissertation). University of Delaware. Retrieved from http://udspace.udel.edu/handle/19716/13434
Chicago Manual of Style (16th Edition):
Zirkel, Timothy K. “Formally verifying the accuracy of numerical approximations in scientific software.” 2014. Doctoral Dissertation, University of Delaware. Accessed March 07, 2021.
http://udspace.udel.edu/handle/19716/13434.
MLA Handbook (7th Edition):
Zirkel, Timothy K. “Formally verifying the accuracy of numerical approximations in scientific software.” 2014. Web. 07 Mar 2021.
Vancouver:
Zirkel TK. Formally verifying the accuracy of numerical approximations in scientific software. [Internet] [Doctoral dissertation]. University of Delaware; 2014. [cited 2021 Mar 07].
Available from: http://udspace.udel.edu/handle/19716/13434.
Council of Science Editors:
Zirkel TK. Formally verifying the accuracy of numerical approximations in scientific software. [Doctoral Dissertation]. University of Delaware; 2014. Available from: http://udspace.udel.edu/handle/19716/13434