You searched for +publisher:"Virginia Tech" +contributor:("Shukla, Sandeep K.")
Showing records 1 – 30 of 85 total matches.
1.
Chattopadhyay, Arijit.
Dynamic Invariant Generation for Concurrent Programs.
Degree: MS, Computer Engineering, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/49103
We propose a fully automated and dynamic method for generating likely invariants from multithreaded programs and then leveraging these invariants to infer atomic regions and diagnose concurrency errors in the software code. Although existing methods for dynamic invariant generation perform reasonably well on sequential programs, for multithreaded programs, their effectiveness often reduces dramatically in terms of both the number of invariants that they can generate and the likelihood of them being true invariants. We solve this problem by developing a new dynamic invariant generator, which consists of a new LLVM based code instrumentation tool, an INSPECT based thread interleaving explorer, and a customized inference engine inside Daikon. We have evaluated the resulting system on public domain multithreaded C/C++ benchmarks. Our experiments show that the new method is effective in generating high-quality invariants. Furthermore, the state and transition invariants generated by our new method have been proved useful both in error diagnosis and in identifying likely atomic regions in the concurrent software code.
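To illustrate the core idea of likely-invariant generation, here is a minimal sketch, not the thesis's LLVM/INSPECT/Daikon pipeline: candidate predicates are checked against variable states sampled across many thread interleavings, and only predicates that are never falsified survive as likely invariants. The variable names and traces below are invented.

```python
# Minimal sketch of likely-invariant inference over observed program states.
# "States" are dicts of variable values sampled at one program point across
# several thread interleavings (a stand-in for instrumented executions).
from itertools import combinations

def likely_invariants(states):
    """Return simple relational predicates that hold in every observed state."""
    if not states:
        return []
    vars_ = sorted(states[0])
    candidates = []
    for a, b in combinations(vars_, 2):
        candidates += [
            (f"{a} <= {b}", lambda s, a=a, b=b: s[a] <= s[b]),
            (f"{a} == {b}", lambda s, a=a, b=b: s[a] == s[b]),
        ]
    for v in vars_:
        candidates.append((f"{v} >= 0", lambda s, v=v: s[v] >= 0))
    # A predicate survives only if no interleaving ever falsified it.
    return [name for name, pred in candidates if all(pred(s) for s in states)]

# States sampled at one program point under three interleavings.
traces = [{"x": 0, "y": 1}, {"x": 2, "y": 2}, {"x": 1, "y": 3}]
print(likely_invariants(traces))   # ['x <= y', 'x >= 0', 'y >= 0']
```

The real system explores interleavings systematically (via INSPECT) precisely because a predicate that survives one schedule may be falsified by another.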
Advisors/Committee Members: Wang, Chao (committeechair), Shukla, Sandeep K. (committee member), Hsiao, Michael S. (committee member).
Subjects/Keywords: Concurrency; Likely Invariant; Dynamic Invariant Generation; Partial Order Reduction; Error Diagnosis; Atomic Region
2.
Bakshi, Dhrumeel.
Techniques for Seed Computation and Testability Enhancement for Logic Built-In Self Test.
Degree: MS, Electrical and Computer Engineering, 2012, Virginia Tech
URL: http://hdl.handle.net/10919/35474
With the increase of device complexity and test-data volume required to guarantee adequate defect coverage, external testing is becoming increasingly difficult and expensive. Logic Built-in Self Test (LBIST) is a viable alternative test strategy, as it helps reduce dependence on elaborate external test equipment, enables the application of a large number of random tests, and allows for at-speed testing. The main problem with LBIST is the suboptimal fault coverage achievable with random vectors. LFSR reseeding is used to increase the coverage; however, to achieve satisfactory coverage, one often needs a large number of seeds. Computing a small number of seeds for LBIST reseeding remains a tremendous challenge, since the vectors needed to detect all faults may be spread across the huge LFSR vector space. In this work, we propose new methods that cast the computation of a small number of LFSR seeds covering all stuck-at faults as a first-order satisfiability problem involving extended theories. We present a technique based on SMT (Satisfiability Modulo Theories) with the theory of bit-vectors to combine the tasks of test generation and seed computation. We describe a seed reduction flow based on the 'chaining' of faults instead of pre-computed vectors. We experimentally demonstrate that our method can produce very small sets of seeds for complete stuck-at fault coverage. Additionally, we present methods for inserting test points to enhance the testability of a circuit in such a way as to allow even further reduction in the number of seeds.
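The seed-computation idea can be sketched as a bit-vector satisfiability query. The toy below (using the z3 SMT solver's Python API) unrolls an invented 8-bit LFSR for a few cycles and asks the solver for a seed whose expanded sequence matches a test cube with don't-cares; the taps, cube, and cycle count are illustrative assumptions, not the thesis's actual flow.

```python
# Sketch: LFSR seed computation as a bit-vector SMT query (requires z3-solver).
from z3 import BitVec, Extract, Concat, Solver, sat

WIDTH = 8

def lfsr_step(state):
    # Fibonacci-style feedback: x7 XOR x5 XOR x4 XOR x3 (taps chosen arbitrarily)
    fb = (Extract(7, 7, state) ^ Extract(5, 5, state)
          ^ Extract(4, 4, state) ^ Extract(3, 3, state))
    return Concat(Extract(6, 0, state), fb)   # shift left, inject feedback

seed = BitVec("seed", WIDTH)
s = Solver()
state = seed
for _ in range(3):                             # expand three cycles
    state = lfsr_step(state)
# Test cube with don't-cares: after expansion, bits 0 and 7 must be 1,
# all other bits are unconstrained.
s.add(Extract(0, 0, state) == 1, Extract(7, 7, state) == 1)
if s.check() == sat:
    print("seed =", s.model()[seed])
```

The thesis goes further by folding the ATPG constraints themselves into the same query, so one solver call yields a seed that detects a chained set of faults.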
Advisors/Committee Members: Hsiao, Michael S. (committeechair), Schaumont, Patrick Robert (committee member), Shukla, Sandeep K. (committee member).
Subjects/Keywords: Satisfiability Modulo Theories (SMT); LFSR Reseeding; Logic Built-In Self Test (LBIST); Integer Linear Programming (ILP); Test-point Insertion
3.
Misra, Supratik Kumar.
Efficient Graph Techniques for Partial Scan Pattern Debug and Bounded Model Checkers.
Degree: MS, Electrical and Computer Engineering, 2012, Virginia Tech
URL: http://hdl.handle.net/10919/31153
Continuous advances in VLSI technology have led to more complex digital designs and shrinking transistor sizes. Due to these developments, design verification and manufacturing test have gained importance, and 70% of the design expenditure is on validation processes. Electronic Design Automation (EDA) tools play a huge role in the validation process through various verification and test tools, and their efficiency has a high impact on saving time and money in this competitive market. Directed Acyclic Graphs (DAGs) are the backbone of most EDA tools: the DAG is an efficient data structure for storing circuit information, and it supports efficient backward traversal, which helps in developing reasoning/debugging tools.
In this thesis, we focus on two such EDA tools using graphs as their underlying structure for circuit information storage
• Scan pattern Debugger for Partial Scan Designs
• Circuit SAT Bounded Model Checkers
We developed a complete Interactive Scan Pattern Debugger Suite currently being used in the industry for next generation microprocessor design. The back end is an implication graph based sequential logic simulator which creates a Debug Implication Graph during the logic simulation of the failing patterns. An efficient node traversal mechanism across time frames, in the DIG, is used to perform the root-cause analysis for the failing scan-cells. In addition, the debugger provides visibility into the circuit internals to understand and fix the root-cause. We integrated the proposed technique into the scan ATPG flow for industrial microprocessor designs. We were able to resolve the First Silicon logical pattern failures within hours, which would have otherwise taken a few days of manual effort for root-causing the failure, understanding the root-cause and fixing it.
For our circuit SAT implementation, we replace the internal implication graph used by the SAT solver with our debug implication graph (DIG). There is a large amount of circuit unrolling in circuit SAT/BMC (Bounded Model Checking) problems, which creates copies of the same combinational blocks in multiple time frames. This allows us to exploit the repetitive circuit structure and combine it with the CNF database in the SAT solver. We propose a new data structure for storing data in a circuit SAT solver, which results in up to a 90% reduction in the number of nodes.
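A minimal sketch of the backward root-cause traversal described above: nodes are (signal, timeframe) pairs, and each simulated value records the assignments that implied it, so the suspects for a failing scan cell are the leaves reached by walking the implication edges backward. The node names, time frames, and edges below are invented.

```python
# Sketch of a Debug Implication Graph (DIG) backtrace across time frames.
from collections import deque

# implied_by[(signal, frame)] -> list of (signal, frame) that caused it
implied_by = {
    ("scan_ff9", 3): [("g42", 3), ("scan_ff9", 2)],
    ("g42", 3):      [("in_a", 3), ("g17", 3)],
    ("scan_ff9", 2): [("g17", 2)],
    ("g17", 3):      [("in_b", 3)],
    ("g17", 2):      [("in_b", 2)],
}

def root_cause(failing_cell):
    """BFS backward across time frames; leaves are candidate root causes."""
    seen, frontier, roots = set(), deque([failing_cell]), []
    while frontier:
        node = frontier.popleft()
        if node in seen:
            continue
        seen.add(node)
        parents = implied_by.get(node, [])
        if not parents:
            roots.append(node)      # no further implication: primary suspect
        frontier.extend(parents)
    return roots

print(root_cause(("scan_ff9", 3)))  # [('in_a', 3), ('in_b', 3), ('in_b', 2)]
```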
Advisors/Committee Members: Hsiao, Michael S. (committeechair), Shukla, Sandeep K. (committee member), Abbott, A. Lynn (committee member).
Subjects/Keywords: Directed Acyclic Graph; Partial Scan Design; Pattern Debugger; Implication Graphs
4.
Iyer, Srikrishna.
A Unifying Interface Abstraction for Accelerated Computing in Sensor Nodes.
Degree: MS, Electrical and Computer Engineering, 2011, Virginia Tech
URL: http://hdl.handle.net/10919/34625
Hardware-software co-design techniques are very suitable to develop the next generation of sensornet applications, which have high computational demands. By making use of a low power FPGA, the peak computational performance of a sensor node can be improved without significant degradation of the standby power dissipation. In this contribution, we present a methodology and tool to enable hardware/software co-design for sensor node application development. We present the integration of nesC, a sensornet programming language, with GEZEL, an easy-to-use hardware description language. We describe the hardware/software interface at different levels of abstraction: at the level of the design language, at the level of the co-simulator, and in the hardware implementation. We use a layered, uniform approach that is particularly suited to deal with the heterogeneous interfaces typically found on small embedded processors. We illustrate the strengths of our approach by means of a prototype application: the integration of a hardware-accelerated crypto-application in a nesC application.
Advisors/Committee Members: Schaumont, Patrick Robert (committeechair), Shukla, Sandeep K. (committee member), Yang, Yaling (committee member).
Subjects/Keywords: Communication Interface-Abstraction Architecture; automatic code-generation; TinyOS; nesC; GEZEL; hardware/software co-design; co-processor; wireless sensor nodes; CPU; FPGA
5.
Murali, Dilip Venkateswaran.
Verification of Cyber Physical Systems.
Degree: MS, Computer Engineering, 2013, Virginia Tech
URL: http://hdl.handle.net/10919/23824
Due to the increasing complexity of today's cyber-physical systems, defects become inevitable and harder to detect. The complexity of such software is generally huge, with millions of lines of code, and the impact of failure of such systems could be hazardous. The reliability of the system depends on the effectiveness and rigor of the testing procedures. Verification of the software behind such cyber-physical systems is required to ensure stability and reliability before the systems are deployed in the field. We have investigated the verification of the software for Autonomous Underwater Vehicles (AUVs) to ensure safety of the system at any given time in the field. To accomplish this, we identified useful invariants that would serve as monitors in detecting abnormal behavior of the software. Potential invariants were extracted and then validated. The investigation attempts to uncover the possibility of performing this method on existing software verification platforms. This was accomplished on Cloud9, which is built on KLEE, and using Microsoft's VCC tool. Experimental results show that this method of extracting invariants can help in identifying new invariants using these two tools, and the invariants identified can be used to monitor the behavior of the autonomous vehicles to detect abnormality and failures in the system much earlier, thereby improving the reliability of the system. Recommendations for improving software quality were provided. The work also explored safety measures and standards on software for safety-critical systems and autonomous vehicles. Metrics for measuring software complexity and quality, along with the requirements to certify AUV software, were also presented. The study helps in understanding verification issues, guidelines, and certification requirements.
Advisors/Committee Members: Hsiao, Michael S. (committeechair), Abbott, Amos L. (committee member), Shukla, Sandeep K. (committee member).
Subjects/Keywords: Invariants detection; Symbolic Execution; KLEE; Cloud9; VCC
6.
Zuo, Yongbo.
Fair Comparison of ASIC Performance for SHA-3 Finalists.
Degree: MS, Electrical and Computer Engineering, 2012, Virginia Tech
URL: http://hdl.handle.net/10919/33446
In the last few decades, secure algorithms have played an irreplaceable role in the protection of private information, such as applications of AES on modems, as well as online bank transactions. The increasing application of secure algorithms on hardware has made implementations on ASIC benchmarks extremely important. Although all kinds of secure algorithms have been implemented into various devices, the effects from different constraints on ASIC implementation performance have never been explored before.
In order to analyze the effects from different constraints for secure algorithms, SHA-3 finalists, which includes Blake, Groestl, Keccak, JH, and Skein, have been chosen as the ones to be implemented for experiments in this thesis.
This thesis first explores the effects of different synthesis constraints on ASIC performance, such as performance when constrained for frequency or for maximum area. After that, the effects of choosing various standard libraries were tested; for instance, the performance of the UMC 130nm and IBM 130nm standard libraries was compared. Additionally, the effects of different technologies were analyzed, namely the 65nm, 90nm, 130nm, and 180nm UMC libraries. Finally, in order to further understand the effects, experiments for post-layout analysis were explored. While some algorithms remain unaffected by floor plan shapes, others show a preference for a specific shape, such as JH, which shows a 12% increase in throughput/area with a 1:2 rectangle compared to a square.
Throughout this thesis, the effects of different ASIC implementation factors have been comprehensively explored, as well as the details of the methodology, metrics, and the framework of the experiments. Finally, detailed experiment results and analysis will be discussed in the following chapters.
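For reference, the throughput/area figure of merit behind such comparisons is simple arithmetic. The sketch below uses invented numbers, not measurements from the thesis, to show how a 12% difference between floorplan shapes would be computed.

```python
# Sketch of the throughput/area figure of merit used in such comparisons.
# All numbers are invented for illustration.

def throughput_per_area(freq_mhz, bits_per_block, cycles_per_block, area_kge):
    throughput_mbps = freq_mhz * bits_per_block / cycles_per_block
    return throughput_mbps / area_kge          # Mbps per kGE

square    = throughput_per_area(freq_mhz=500, bits_per_block=512,
                                cycles_per_block=36, area_kge=50.0)
rect_1to2 = throughput_per_area(freq_mhz=560, bits_per_block=512,
                                cycles_per_block=36, area_kge=50.0)
print(f"1:2 rectangle vs. square: {rect_1to2 / square - 1:+.0%}")  # +12%
```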
Advisors/Committee Members: Nazhandali, Leyla (committeechair), Schaumont, Patrick Robert (committee member), Shukla, Sandeep K. (committee member).
Subjects/Keywords: SHA; NIST; ASIC; Finalist; Hardware; Encryption; Hash; Cipher; Key
7.
Kracht, Matthew Wallace.
Real-Time Embedded Software Modeling and Synthesis using Polychronous Data Flow Languages.
Degree: MS, Computer Engineering, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/46866
As embedded software and platforms become more complicated, many safety properties are left to simulation and testing. MRICDF is a formal polychronous language used to guarantee certain safety properties and alleviate the burden of software development and testing. We propose real-time extensions to MRICDF so that temporal properties of embedded systems can also be proven. We adapt the extended precedence encoding technique of Prelude and expand upon current schedulability analysis techniques for multi-periodic real-time systems.
Advisors/Committee Members: Shukla, Sandeep K. (committeechair), Wang, Chao (committee member), Clancy, Thomas Charles (committee member).
Subjects/Keywords: Synchronous Languages; Real-Time Systems; Schedulability Analysis
8.
Messaoud, Safa.
Translating Discrete Time SIMULINK to SIGNAL.
Degree: MS, Computer Engineering, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/49299
As Cyber Physical Systems (CPS) are getting more complex and safety critical, Model Based Design (MBD), which consists of building formal models of a system to be used in verification and correct-by-construction code generation, is becoming a promising methodology for the development of the embedded software of such systems. This design paradigm significantly reduces development cost and time while guaranteeing better robustness, capability, and correctness with respect to the original specifications, when compared with traditional ad-hoc design methods. SIMULINK has been the most popular tool for embedded control design in research as well as in industry for the last decades. As SIMULINK does not have formal semantics, the application of the model-based design methodology and tools to its models is very limited. In this thesis, we present a semantic translator that transforms discrete time SIMULINK models into SIGNAL programs. The choice of SIGNAL is motivated by its polychronous formalism, which enhances synchronous programming with asynchronous concurrency, as well as by its compiler's ability to generate deterministic multi-threaded code. Our translation involves three major steps: clock inference, type inference, and hierarchical top-down translation. We validate the semantic preservation of our prototype tool by testing it on different SIMULINK models.
Advisors/Committee Members: Shukla, Sandeep K. (committeechair), Hsiao, Michael S. (committee member), Paul, JoAnn Mary (committee member).
Subjects/Keywords: SIMULINK; SIGNAL; Embedded Software; Code Generation
9.
Hu, Wei.
Sufficiency-based Filtering of Invariants for Sequential Equivalence Checking.
Degree: MS, Electrical and Computer Engineering, 2011, Virginia Tech
URL: http://hdl.handle.net/10919/31121
Verification, as opposed to Testing and Post-Silicon Validation, is a critical step in Integrated Circuit (IC) design, answering the question "Are we designing the right function?" before the chips are manufactured. One of the core areas of Verification is Equivalence Checking (EC), which is a special yet independent case of Model Checking (MC). Equivalence Checking aims to prove that two circuits, when fed with the same inputs, produce the exact same outputs. There are broadly two ways to conduct Equivalence Checking: simulation and Formal Equivalence Checking. Simulation requires one to try out different input combinations and observe if the two circuits produce the same output. Obviously, since it is not possible to enumerate all combinations of different inputs, completeness cannot be guaranteed. On the other hand, Formal Equivalence Checking can achieve 100% confidence. As the number of gates and, in particular, the number of flip-flops in circuits has grown tremendously in recent years, the problem of Formal Equivalence Checking has become much harder: a recent evaluation of a general-case Formal Equivalence Checking engine [1] shows that about 15% of industrial designs cannot be verified after a typical sequential synthesis flow. As a result, a lot of attention has been drawn to Formal Equivalence Checking, both academically and industrially.
For years, Combinational Equivalence Checking (CEC) has been the pervasive framework for Formal Equivalence Checking (FEC) in the industry. However, due to the limitation of being able to verify circuits only with 1:1 flip-flop pairing, a pure CEC-based methodology requires a full regression of the verification process, meaning that performing sequential optimizations like retiming or FSM re-encoding becomes somewhat of a bottleneck in the design cycle [2]. Therefore, a more powerful framework, Sequential Equivalence Checking (SEC), has been gradually adopted in industry.
In this thesis, we target Sequential Equivalence Checking by finding efficient yet powerful groups of relationships (invariants) among the signals of the two circuits being compared. In order to achieve a high success rate on some of the extremely hard-to-verify circuits, we are interested in both two-node and multi-node (up to 4 nodes) invariants. We are also interested in invariants among both flip-flops and internal signals. For large circuits, there can be too many potential invariants, requiring much time to prove. However, we observed that a large portion of them may not even contribute to equivalence checking. Moreover, equivalence checking can be significantly helped if there exists a method to check whether a subset of potential invariants would be sufficient (e.g., whether two-node invariants are enough or multi-node invariants are also needed) prior to the verification step. Therefore, we propose two sufficiency-based approaches to identify useful invariants out of the initial potential invariants for SEC. Experimental results show that our approach can either demonstrate insufficiency of the…
Advisors/Committee Members: Hsiao, Michael S. (committeechair), Shukla, Sandeep K. (committee member), Schaumont, Patrick Robert (committee member).
Subjects/Keywords: Invariant filtering; Assume and Verify; Boolean Satisfiability(SAT); Sequential Equivalence Checking(SEC)
10.
Rafeei, Lalleh.
Fast Approximation Framework for Timing and Power Analysis of Ultra-Low-Voltage Circuits.
Degree: MS, Electrical and Computer Engineering, 2012, Virginia Tech
URL: http://hdl.handle.net/10919/31678
Ultra-Low-Voltage operation, which can be considered an extreme case of voltage scaling, can greatly reduce the power consumption of circuits. Despite the fact that Ultra-Low-Voltage operation has been proven to be very effective by several successful prototypes in recent years, there is no fast, effective, and comprehensive technique for designers to estimate the power and delay of a design operating in the Ultra-Low-Voltage region. While some frameworks and mathematical models exist to estimate power or delay, they have certain limitations, such as being applicable to either power or delay alone, or only within a certain region of transistor operation. This thesis presents a simulation framework that can quickly and accurately characterize a circuit from the nominal voltage all the way down into the subthreshold region. The framework uses the nominal frequency and power of a target circuit, which can be obtained using gate-level or transistor-level simulation tools, together with normalized ring oscillator curves, to predict delay and power characteristics at lower operating voltages. A specific contribution of this thesis is a weighted average method, which is a major improvement over a previously published form of this framework. Another contribution is that the amount of process variation in the ULV regions of a circuit can be estimated using the proposed framework. The weighted averages framework takes into account the types of gates that are used in the circuit and critical path to give a more accurate power and timing characterization. Although the operating voltages are far below nominal, the errors are no greater than 11.27 percent for circuit delay, 16.96 percent for active energy, and 4.86 percent for leakage power for the weighted averages technique. This is in contrast to the original framework, which has maximum errors of 39.75, 17.60, and 8.90 percent for circuit delay, active energy, and leakage power, respectively. To validate our framework, a detailed analysis is given in the presence of a variety of design parameters such as fanout, transistor widths, et cetera. In addition, we also validate our framework for a range of sequential benchmark circuits.
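The weighted-average idea reduces to weighting per-gate-type normalized scaling factors by the gate mix of the critical path. The sketch below uses invented normalized ring-oscillator delay factors and gate counts, not the thesis's characterization data.

```python
# Sketch of the weighted-average scaling idea. All numbers are invented.

# delay at 0.3 V divided by delay at nominal voltage, per gate type
norm_delay_0v3 = {"NAND2": 950.0, "NOR2": 1210.0, "INV": 780.0}
critical_path_gates = {"NAND2": 14, "NOR2": 5, "INV": 9}   # gate-type mix

total = sum(critical_path_gates.values())
weighted_factor = sum(norm_delay_0v3[g] * n
                      for g, n in critical_path_gates.items()) / total

nominal_delay_ns = 2.1           # from a standard gate-level timing run
print(f"predicted 0.3 V delay ~ {nominal_delay_ns * weighted_factor:.0f} ns")
```

An unweighted average over gate types would ignore that some paths are dominated by gate types that degrade faster at low voltage, which is where the accuracy improvement comes from.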
Advisors/Committee Members: Nazhandali, Leyla (committeechair), Meehan, Kathleen (committee member), Shukla, Sandeep K. (committee member).
Subjects/Keywords: subthreshold; approximation framework; power; timing; CMOS; VLSI; Ultra-Low-Voltage
11.
Duong, Khanh Viet.
On Enhancing Deterministic Sequential ATPG.
Degree: MS, Electrical and Computer Engineering, 2011, Virginia Tech
URL: http://hdl.handle.net/10919/31283
This thesis presents four different techniques for improving the average-case performance of deterministic sequential circuit Automatic Test Pattern Generators (ATPG). Three techniques make use of information gathered during test generation to help identify more unjustifiable states with a higher percentage of 'don't care' values. An approach for reducing the search space of the ATPG is also introduced; the technique can significantly reduce the size of the search space but cannot ensure the completeness of the search. Results on ISCAS'85 benchmark circuits show that all of the proposed techniques allow for better fault detection in shorter amounts of time. These techniques, when used together, produced test vectors with high fault coverage. Also investigated in this thesis is the Decision Inversion Problem, which threatens the completeness of ATPG tools such as HITEC or ATOMS. We propose a technique that eliminates this problem by forcing the ATPG to consider search spaces with certain flip-flops untouched. Results show that our technique eliminated the decision inversion problem, ensuring the soundness of the search algorithm under the 9-valued logic model.
Advisors/Committee Members: Hsiao, Michael S. (committeechair), Ha, Dong Sam (committee member), Shukla, Sandeep K. (committee member).
Subjects/Keywords: Automatic Test Pattern Generation; Logic Testing; Sequential Circuits
12.
Sinha, Ambuj Sudhir.
Design Techniques for Side-channel Resistant Embedded Software.
Degree: MS, Electrical and Computer Engineering, 2011, Virginia Tech
URL: http://hdl.handle.net/10919/34465
Side Channel Attacks (SCA) are a class of passive attacks on cryptosystems that exploit implementation characteristics of the system. Currently, a lot of research is focused on developing countermeasures to side channel attacks. In this thesis, we address two challenges that are an inherent part of the efficient implementation of SCA countermeasures. While designing a system, design choices made to enhance the efficiency or performance of the system can also affect its side channel security. The first challenge is that the effect of different design choices on the side channel resistance of a system is currently not well understood. It is important to understand these effects in order to develop systems that are both secure and efficient. A second problem with incorporating SCA countermeasures is the increased design complexity. It is often difficult and time consuming to integrate an SCA countermeasure into a larger system.
In this thesis, we explore the above-mentioned problems from the point of view of developing embedded software that is resistant to power-based side channel attacks. Our first work is an evaluation of different software AES implementations from the perspective of side channel resistance, showing the effect of design choices on the security and performance of the implementation. Next, we present work that identifies the problems that arise while designing software for a particular type of SCA-resistant architecture, the Virtual Secure Circuit. We provide a solution in terms of a methodology that can be used for developing software for such a system, and also demonstrate that this methodology can be conveniently automated, leading to swifter and easier software development for side channel resistant designs.
Advisors/Committee Members: Schaumont, Patrick Robert (committeechair), Shukla, Sandeep K. (committee member), Hsiao, Michael S. (committee member).
Subjects/Keywords: Bitslice Cryptography; Side Channel Attacks; Virtual Secure Circuit; Secure Embedded Systems; Side-channel Countermeasures
13.
Prabhu, Sarvesh P.
An Efficient 2-Phase Strategy to Achieve High Branch Coverage.
Degree: MS, Electrical and Computer Engineering, 2012, Virginia Tech
URL: http://hdl.handle.net/10919/40931
Symbolic execution-based test generation is gaining popularity for software test generation. The increasing complexity of software programs is posing new challenges for symbolic execution-based test generation because of the path explosion problem. We present a new 2-phase symbolic execution driven strategy that quickly achieves high branch coverage in software. Phase 1 follows a greedy approach that quickly covers as many branches as possible by exploring each branch through its corresponding shortest path prefix. Phase 2 covers the remaining branches that were left uncovered because the shortest path to the branch was infeasible. In Phase 1, a basic conflict-driven learning is used to skip all the paths that contain any of the earlier encountered conflicting conditions, while in Phase 2, a more intelligent conflict-driven learning is used to skip regions that do not have a feasible path to any unexplored branch. This results in a considerable reduction in unnecessary SMT solver calls. Experimental results show that significant speedup can be achieved, effectively reducing the time to detect a bug and providing higher branch coverage for a fixed timeout period than previous techniques.
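Phase 1 can be sketched abstractly: each branch is reached by a shortest path prefix (a list of branch conditions), the feasibility check (an SMT call in the real tool) becomes a plain callback, and "learned conflicts" are condition sets known to be mutually unsatisfiable. Everything below, including the branch names, is invented for illustration.

```python
# Sketch of Phase 1: greedy shortest-prefix coverage with conflict skipping.

def phase1(branches, feasible, learned):
    """Greedily cover each branch via its shortest path prefix."""
    covered, deferred = set(), []
    for branch, prefix in sorted(branches.items(), key=lambda kv: len(kv[1])):
        conds = frozenset(prefix)
        if any(conflict <= conds for conflict in learned):
            deferred.append(branch)          # skip: contains a known conflict
            continue
        if feasible(prefix):                 # SMT solver call in the real tool
            covered.add(branch)
        else:
            deferred.append(branch)          # Phase 2 searches other paths
    return covered, deferred

branches = {"b1": ["x>0"], "b2": ["x>0", "y<0"], "b3": ["x>0", "x<0"]}
learned = [frozenset(["x>0", "x<0"])]
covered, deferred = phase1(branches, feasible=lambda p: True, learned=learned)
print(covered, deferred)    # e.g. {'b1', 'b2'} ['b3']
```

The saving comes from the skip test: a prefix containing a learned conflict is rejected without any solver call at all.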
Advisors/Committee Members: Hsiao, Michael S. (committeechair), Shukla, Sandeep K. (committee member), Yang, Yaling (committee member).
Subjects/Keywords: Branch Coverage; Conflict-driven Learning; Symbolic Execution; Software Testing
14.
Munagani, Indira Priya Darshini.
Mining Rare Features in Fingerprints using Core points and Triplet-based Features.
Degree: MS, Computer Engineering, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/24784
A fingerprint matching algorithm with a novel set of matching parameters based on core points and triangular descriptors is proposed to discover rarity in fingerprints. The algorithm uses a mathematical and statistical approach to discover rare features in fingerprints, which provides scientific validation for both ten-print and latent fingerprint evidence. A feature is considered rare if it is statistically uncommon; that is, the rare feature should be unique among N (N > 100) randomly sampled prints. A rare feature in a fingerprint has higher discriminatory power when it is identified in a print (latent or otherwise). In the case of latent fingerprint matching, the enhanced discriminatory power from the rare features can help in delivering a confident court judgment. In addition to mining the rare features, a parallel algorithm for fingerprint matching on GPUs is also proposed to reduce the run time of fingerprint matching on larger databases. Results show that 1) the matching algorithm is useful in eliminating false matches; 2) each of the 30 fingerprints randomly selected to mine rare features has a small set of highly distinctive, statistically rare features, some of which occur in only one in 1000 fingerprints; and 3) the parallel algorithm implemented on GPUs for larger databases is around 40 times faster than the sequential algorithm.
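A minimal sketch of triplet-based rarity mining, assuming a simplified descriptor: minutiae are (x, y) points, a triplet's descriptor is its sorted triangle side lengths (invariant to rotation and translation), and a descriptor seen only once across the sampled prints counts as rare. The coordinates and the quantization step are invented.

```python
# Sketch of triplet-based rarity mining over minutiae point sets.
from itertools import combinations
from collections import Counter
from math import dist

def triplet_descriptors(minutiae, q=5.0):
    """Quantized, pose-invariant descriptors of all minutiae triplets."""
    out = []
    for p1, p2, p3 in combinations(minutiae, 3):
        sides = sorted((dist(p1, p2), dist(p2, p3), dist(p1, p3)))
        out.append(tuple(round(s / q) for s in sides))
    return out

database = [
    [(0, 0), (30, 0), (0, 40), (55, 60)],     # print 1
    [(2, 1), (31, 2), (1, 42), (90, 5)],      # print 2 (similar core region)
]
freq = Counter(d for print_ in database for d in triplet_descriptors(print_))
rare = [d for d, n in freq.items() if n == 1]
print(f"{len(rare)} rare descriptors out of {len(freq)}")   # 6 out of 7
```

Because each print's triplets can be scored independently, the same per-triplet computation parallelizes naturally across GPU threads, which is the basis for the reported speedup.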
Advisors/Committee Members: Hsiao, Michael S. (committeechair), Abbott, Amos L. (committee member), Shukla, Sandeep K. (committee member).
Subjects/Keywords: Fingerprints; Rare Features; Rarity; Latent; Core Points; Triplets; GPU
15.
Sowers, David Albert.
Architecture for Issuing DoD Mobile Derived Credentials.
Degree: MS, Computer Engineering, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/64351
With an increase in performance, dependency, and ubiquitousness, the necessity for secure mobile device functionality is rapidly increasing. Authentication of an individual's identity is the fundamental component of physical and logical access to secure facilities and information systems. Identity management within the Department of Defense relies on Public Key Infrastructure, implemented through the use of X.509 certificates and private keys issued on smartcards called Common Access Cards (CAC). However, use of CAC credentials on smartphones is difficult due to the lack of effective smartcard reader integration with mobile devices. The creation of a mobile phone derived credential, a new X.509 certificate and key pair based on the credentials of the CAC certificates, would eliminate the need for CAC integration with mobile devices. This thesis describes four architectures for securely and efficiently generating and delivering a derived credential to a mobile device for secure communications with mobile applications. Two architectures generate credentials through a software cryptographic module, providing a LOA-3 credential. The other two architectures provide a LOA-4 credential by utilizing a hardware cryptographic module for the generation of the key pair. In two of the architectures, the Certificate Authority (CA) for the new derived credentials is the digital signature certificate from the CAC. The other two architectures utilize a newly created CA, which would reside on the DoD network and be used to approve and sign the derived credentials. Additionally, this thesis demonstrates prototype implementations of the two software-generated derived credential architectures using CAC authentication and outlines the implementation of the hardware cryptographic derived credential.
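The issuance step common to these architectures can be sketched with the Python "cryptography" package: after CAC authentication (assumed to have already happened), a dedicated CA signs a fresh key pair for the device. The names, curve choice, and validity period below are invented; real deployments add certificate extensions, revocation, and hardware-backed key storage for LOA-4.

```python
# Sketch of derived-credential issuance (requires the "cryptography" package).
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def issue_derived_credential(ca_key, ca_name, subject_cn):
    device_key = ec.generate_private_key(ec.SECP256R1())   # stays on device
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name(
            [x509.NameAttribute(NameOID.COMMON_NAME, subject_cn)]))
        .issuer_name(ca_name)
        .public_key(device_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(ca_key, hashes.SHA256())
    )
    return device_key, cert

ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name(
    [x509.NameAttribute(NameOID.COMMON_NAME, "Derived-Credential CA")])
key, cert = issue_derived_credential(ca_key, ca_name, "jdoe@example.mil")
print(cert.subject.rfc4514_string())
```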
Advisors/Committee Members: Clancy, Thomas Charles (committeechair), Silva, Luiz A. (committee member), Shukla, Sandeep K. (committee member).
Subjects/Keywords: Derived Credentials; Public Key Infrastructure; Common Access Card; Department of Defense; x509; Mobile Phone
16.
Huang, Sinan.
Hardware Evaluation of SHA-3 Candidates.
Degree: MS, Electrical and Computer Engineering, 2011, Virginia Tech
URL: http://hdl.handle.net/10919/32932
Cryptographic hash functions are used extensively in information security, most notably in digital authentication and data integrity verification. Their performance is an important factor in the overall performance of a secure system. In 2005, some groups of cryptanalysts were making increasingly successful attacks and exploits on the cryptographic hash function SHA-1, the most widely used hash function of the secure hashing algorithm family. Although these attacks do not work on SHA-2, the next in the series of the secure hashing algorithm family, the National Institute of Standards and Technology still believes that it is necessary to hold a competition to select a new algorithm to be added to the current secure hashing algorithm family. The new algorithm will be chosen through a public competition. The entries will be evaluated with different kinds of criteria, such as security, performance, and implementation characteristics. These criteria will cover not only the domain of software but the domain of hardware as well. This is the motivation of this thesis.
This thesis describes the experiments and measurements done to evaluate the SHA-3 cryptographic hash function candidates' performance on both ASIC and FPGA devices. The methodology, metrics, implementation details, and the framework of the experiments will be described. The results on both hardware devices will be shown and possible future directions will be discussed.
Advisors/Committee Members: Shukla, Sandeep K. (committee member), Nazhandali, Leyla (committeecochair), Schaumont, Patrick Robert (committeecochair).
Subjects/Keywords: Cryptography; Security; SHA-3; Hardware Evaluation
17.
Prabhu, Sarvesh P.
Techniques for Enhancing Test and Diagnosis of Digital Circuits.
Degree: PhD, Computer Engineering, 2015, Virginia Tech
URL: http://hdl.handle.net/10919/51181
► Test and Diagnosis are critical areas in semiconductor manufacturing. Every chip manufactured using a new or premature technology or process needs to be tested for…
(more)
▼ Test and Diagnosis are critical areas in semiconductor manufacturing. Every chip manufactured using a new or premature technology or process needs to be tested for manufacturing defects to ensure defective chips are not sold to the customer. Conventionally, test is done by mounting the chip on an Automated Test Equipment (ATE) and applying test patterns to test for different faults. With shrinking feature sizes, the complexity of the circuits on chip is increasing, which in turn increases the number of test patterns needed to test the chip comprehensively. This increases the test application time which further increases the cost of test, ultimately leading to increase in the cost per device.
Furthermore, chips that fail during test need to be diagnosed to determine the cause of the failure so that the manufacturing process can be improved to increase the yield. With increase in the size and complexity of the circuits, diagnosis is becoming an even more challenging and time consuming process. Fast diagnosis of failing chips can help in reducing the ramp-up to the high volume manufacturing stage and thus reduce the time to market. To reduce the time needed for diagnosis, efficient diagnostic patterns have to be generated that can distinguish between several faults. However, in order to reduce the test application time, the total number of patterns should be minimized. We propose a technique for generating diagnostic patterns that are inherently compact. Experimental results show up to 73% reduction in the number of diagnostic patterns needed to distinguish all faults.
Logic Built-in Self-Test (LBIST) is an alternative methodology for testing, wherein all components needed to test the chip are on the chip itself. This eliminates the need of expensive ATEs and allows for at-speed testing of chips. However, there is hardware overhead incurred in storing deterministic test patterns on chip and failing chips are hard to diagnose in this LBIST architecture due to limited observability. We propose a technique to reduce the number of patterns needed to be stored on chip and thus reduce the hardware overhead. We also propose a new LBIST architecture which increases the diagnosability in LBIST with a minimal hardware overhead. These two techniques overcome the disadvantages of LBIST and can make LBIST more popular solution for testing of chips.
Modern designs may contain a large number of small embedded memories. Memory Built-in Self-Test (MBIST) is the conventional technique for testing memories, but it incurs a hardware overhead. Using MBIST for small embedded memories is impractical, as the hardware overhead would be significantly high. Test generation for such circuits is difficult because the fault effect needs to be propagated through the memory. We propose a new technique for testing circuits with embedded memories. By using an SMT solver, we model the memory at a high level of abstraction using the theory of arrays, while keeping the surrounding logic at the gate level. This effectively converts the test generation problem into…
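The modeling idea in this last paragraph can be made concrete with a small SMT query. A hypothetical Z3 sketch (a toy read-after-write path around a 16-word memory, not the dissertation's tool): the memory is a theory-of-arrays term, the surrounding logic is bit-vector constraints, and a test is any satisfying assignment that makes the good and faulty outputs differ.

from z3 import Array, BitVec, BitVecSort, Select, Solver, Store, sat

# Memory as an SMT array (16 words x 8 bits); inputs as bit-vectors.
mem = Array("mem", BitVecSort(4), BitVecSort(8))
addr, din = BitVec("addr", 4), BitVec("din", 8)

# Good machine: write din, read it back, pass it through an inverter stage.
good_out = ~Select(Store(mem, addr, din), addr)

# Faulty machine: stuck-at-0 on bit 0 of the memory's read port.
faulty_out = ~(Select(Store(mem, addr, din), addr) & 0xFE)

s = Solver()
s.add(good_out != faulty_out)   # the fault effect must reach the output
if s.check() == sat:
    m = s.model()
    print("test: addr =", m.eval(addr, model_completion=True),
          ", din =", m.eval(din, model_completion=True))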
Advisors/Committee Members: Hsiao, Michael S. (committeechair), Wang, Chao (committee member), Shimozono, Mark M. (committee member), Shukla, Sandeep K. (committee member), Abbott, Amos L. (committee member).
Subjects/Keywords: LBIST; LFSR-reseeding; Diagnostic Test Generation; Automated Test Pattern Generation (ATPG); Property Checking
APA (6th Edition):
Prabhu, S. P. (2015). Techniques for Enhancing Test and Diagnosis of Digital Circuits. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/51181
Chicago Manual of Style (16th Edition):
Prabhu, Sarvesh P. “Techniques for Enhancing Test and Diagnosis of Digital Circuits.” 2015. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/51181.
MLA Handbook (7th Edition):
Prabhu, Sarvesh P. “Techniques for Enhancing Test and Diagnosis of Digital Circuits.” 2015. Web. 26 Feb 2021.
Vancouver:
Prabhu SP. Techniques for Enhancing Test and Diagnosis of Digital Circuits. [Internet] [Doctoral dissertation]. Virginia Tech; 2015. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/51181.
Council of Science Editors:
Prabhu SP. Techniques for Enhancing Test and Diagnosis of Digital Circuits. [Doctoral Dissertation]. Virginia Tech; 2015. Available from: http://hdl.handle.net/10919/51181

Virginia Tech
18.
Garlapati, Shravan Kumar Reddy.
Enabling Communication and Networking Technologies for Smart Grid.
Degree: PhD, Computer Engineering, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/56629
Transforming the aging electric grid into a smart grid is an active area of research in industry and the government. One of the main objectives of the smart grid is to improve the efficiency of power generation, transmission, and distribution, and also to improve the stability and reliability of the grid. In order to achieve this, the various processes involved in power generation, transmission, and distribution should be armed with advanced sensor technologies and with computing, communication, and networking capabilities to an unprecedented level. These high-speed data transfer and computational abilities help power system engineers obtain wide area measurements, achieve better control of power system operations, and improve the reliability of power supply and the efficiency of different power grid operations.
In the process of making the grid smarter, problems existing in traditional grid applications can be identified, and solutions have to be developed to fix them. In this dissertation, two problems whose solutions help power system engineers meet the smart grid objectives mentioned above are researched. One problem is related to the distribution-side smart grid and the other is part of the transmission-side smart grid. Advanced Metering Infrastructure (AMI) is one of the important distribution-side smart grid applications. AMI is a technology in which smart meters installed at customer sites give utilities the ability to monitor and collect information on the amount of electricity, water, and gas consumed by the user.
Many recent research studies suggested the use of 3G cellular CDMA2000 for the AMI network, as it provides an advanced and cost-effective solution for smart grid communications. Taking into account both technical and non-technical factors such as extended lifetime, security, availability, and control of the solution, Alliander, an electric utility in the Netherlands, deployed a private 3G CDMA2000 network for smart metering. Although 3G CDMA2000 satisfies the requirements of smart grid applications, an analysis of the use of the current state-of-the-art 3G CDMA2000 for smart grid applications indicates that its usage results in a high percentage of control overhead, high latency, and high power consumption for data transfer. As a part of this dissertation, we proposed FLEX-MAC, a new Medium Access Control (MAC) protocol that reduces the latency and overhead in smart meter data collection when compared to the 3G CDMA2000 MAC.
As mentioned above, the second problem studied in this dissertation is related to the transmission-side grid. Power grid transmission and sub-transmission lines are generally protected by distance relays. After a thorough analysis of historical U.S. blackouts, the North American Electric Reliability Council (NERC) concluded that hidden-failure-induced tripping of distance relays is responsible for 70% of U.S. blackouts. As a part of this dissertation, an agent-based distance relay protection scheme is proposed to improve the robustness of distance relays to…
Advisors/Committee Members: Reed, Jeffrey Hugh (committeechair), Wernz, Christian (committee member), Buehrer, R. Michael (committee member), Shukla, Sandeep K. (committee member), Centeno, Virgilio A. (committee member).
Subjects/Keywords: Distance Relays; Hidden Failures; Blackouts; Agents; DFS; IEEE C37.118; Multiple Facility Location; AMI; Spread Spectrum; Markov Chain Analysis; Backward Recursive Dynamic Programming
APA (6th Edition):
Garlapati, S. K. R. (2014). Enabling Communication and Networking Technologies for Smart Grid. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/56629
Chicago Manual of Style (16th Edition):
Garlapati, Shravan Kumar Reddy. “Enabling Communication and Networking Technologies for Smart Grid.” 2014. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/56629.
MLA Handbook (7th Edition):
Garlapati, Shravan Kumar Reddy. “Enabling Communication and Networking Technologies for Smart Grid.” 2014. Web. 26 Feb 2021.
Vancouver:
Garlapati SKR. Enabling Communication and Networking Technologies for Smart Grid. [Internet] [Doctoral dissertation]. Virginia Tech; 2014. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/56629.
Council of Science Editors:
Garlapati SKR. Enabling Communication and Networking Technologies for Smart Grid. [Doctoral Dissertation]. Virginia Tech; 2014. Available from: http://hdl.handle.net/10919/56629

Virginia Tech
19.
Eldib, Hassan Shoukry.
Constraint Based Program Synthesis for Embedded Software.
Degree: PhD, Computer Engineering, 2015, Virginia Tech
URL: http://hdl.handle.net/10919/55120
In the world we live in today, we rely greatly on software in nearly every aspect of our lives. In many critical applications, such as transportation and medical systems, catastrophic consequences could occur if the software is buggy. As the computational power and storage capacity of computer hardware keep increasing, so do the size and complexity of the software. This makes testing and verification increasingly challenging in practice, and consequently creates a chance for software with critical bugs to find its way into the consumer market.
In this dissertation, I present a set of innovative new methods for automatically verifying, as well as synthesizing, critical software and hardware in embedded computing applications. Based on a set of rigorous formal analysis techniques, my methods can guarantee that the resulting software is efficient and secure as well as provably correct.
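Constraint-based synthesis of this kind is often organized as a counterexample-guided inductive synthesis (CEGIS) loop over an SMT solver. A toy Z3 sketch under that assumption, with a one-constant template and a trivial specification, far simpler than the dissertation's synthesizer:

from z3 import BitVec, BitVecVal, Not, Solver, URem, sat

def spec(x, c):
    # specification: the synthesized mask c must make x & c == x mod 16
    return (x & c) == URem(x, 16)

x, c = BitVec("x", 8), BitVec("c", 8)
examples = [BitVecVal(0, 8)]          # start from one arbitrary input

while True:
    synth = Solver()                  # find a candidate fitting all examples
    for e in examples:
        synth.add(spec(e, c))
    assert synth.check() == sat, "no candidate fits the examples"
    cand = synth.model().eval(c, model_completion=True)

    verify = Solver()                 # check the candidate on all inputs
    verify.add(Not(spec(x, cand)))
    if verify.check() == sat:         # violated: learn the counterexample
        examples.append(verify.model().eval(x, model_completion=True))
    else:
        print("synthesized mask:", cand)   # expect 15 (0x0F)
        break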
Advisors/Committee Members: Wang, Chao (committeechair), Tilevich, Eli (committee member), Schaumont, Patrick Robert (committee member), Hsiao, Michael S. (committee member), Shukla, Sandeep K. (committee member).
Subjects/Keywords: Program Synthesis; Formal Verification; Embedded Software; Security; Cryptography; Side-Channel Attacks and Countermeasures
APA (6th Edition):
Eldib, H. S. (2015). Constraint Based Program Synthesis for Embedded Software. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/55120
Chicago Manual of Style (16th Edition):
Eldib, Hassan Shoukry. “Constraint Based Program Synthesis for Embedded Software.” 2015. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/55120.
MLA Handbook (7th Edition):
Eldib, Hassan Shoukry. “Constraint Based Program Synthesis for Embedded Software.” 2015. Web. 26 Feb 2021.
Vancouver:
Eldib HS. Constraint Based Program Synthesis for Embedded Software. [Internet] [Doctoral dissertation]. Virginia Tech; 2015. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/55120.
Council of Science Editors:
Eldib HS. Constraint Based Program Synthesis for Embedded Software. [Doctoral Dissertation]. Virginia Tech; 2015. Available from: http://hdl.handle.net/10919/55120

Virginia Tech
20.
Nanjundappa, Mahesh.
Formal Techniques for Design and Development of Safety Critical Embedded Systems from Polychronous Models.
Degree: PhD, Computer Engineering, 2015, Virginia Tech
URL: http://hdl.handle.net/10919/73483
Formally-based design and implementation techniques for complex safety-critical embedded systems are required not only to handle the complexity, but also to provide correctness guarantees. Traditional design approaches struggle to cope with the complexity, and they generally require extensive testing to guarantee correctness. As designs get larger and more complex, traditional approaches face many limitations. An alternate design approach is to adopt a "correct-by-construction" paradigm and synthesize the desired hardware and software from high-level descriptions expressed in one of the many formal modeling languages. Since these languages are equipped with formal semantics, formally-based tools can be employed for various analyses. In this dissertation, we adopt one such formal modeling language - MRICDF (Multi-Rate Instantaneous Channel-connected Data Flow). MRICDF is a graphical, declarative, polychronous modeling language, with a formalism that allows the modeler to easily describe multi-clocked systems without the necessity of a global clock. Unnecessary synchronizations among concurrent computation entities can be avoided using a polychronous language such as MRICDF. We have explored Boolean theory-based techniques for synthesizing multi-threaded/concurrent code and extended the techniques to improve the performance of the synthesized multi-threaded code. We have also explored synthesizing ASIPs (Application Specific Instruction Set Processors) from MRICDF models. Further, we have developed formal techniques to identify constructive causality in polychronous models, as well as SMT (Satisfiability Modulo Theories) based techniques to identify dimensional inconsistencies and to perform value-range analysis of polychronous models.
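Value-range analysis, the last of these, has a particularly compact SMT formulation. A minimal Z3 sketch, assuming a toy dataflow equation in place of a real MRICDF model: the tightest bounds on a signal are obtained by asking the solver to minimize and maximize it under the model's constraints.

from z3 import Ints, Optimize

x1, x2, y = Ints("x1 x2 y")
opt = Optimize()
opt.set(priority="box")             # optimize the objectives independently
opt.add(0 <= x1, x1 <= 15)          # input ranges taken from the model
opt.add(-4 <= x2, x2 <= 4)
opt.add(y == 3 * x1 - x2)           # the node's dataflow equation

lo, hi = opt.minimize(y), opt.maximize(y)
opt.check()
print("y ranges over [", lo.value(), ",", hi.value(), "]")   # [-4, 49]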
Advisors/Committee Members: Shukla, Sandeep K. (committeechair), Haghighat, Alireza (committee member), Wang, Chao (committee member), Clancy, Thomas Charles (committee member), Mili, Lamine M. (committee member).
Subjects/Keywords: Model-based Design; MBD; Software Synthesis; Formal techniques; Code generation; Formal analysis
APA (6th Edition):
Nanjundappa, M. (2015). Formal Techniques for Design and Development of Safety Critical Embedded Systems from Polychronous Models. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/73483
Chicago Manual of Style (16th Edition):
Nanjundappa, Mahesh. “Formal Techniques for Design and Development of Safety Critical Embedded Systems from Polychronous Models.” 2015. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/73483.
MLA Handbook (7th Edition):
Nanjundappa, Mahesh. “Formal Techniques for Design and Development of Safety Critical Embedded Systems from Polychronous Models.” 2015. Web. 26 Feb 2021.
Vancouver:
Nanjundappa M. Formal Techniques for Design and Development of Safety Critical Embedded Systems from Polychronous Models. [Internet] [Doctoral dissertation]. Virginia Tech; 2015. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/73483.
Council of Science Editors:
Nanjundappa M. Formal Techniques for Design and Development of Safety Critical Embedded Systems from Polychronous Models. [Doctoral Dissertation]. Virginia Tech; 2015. Available from: http://hdl.handle.net/10919/73483

Virginia Tech
21.
Li, Min.
Acceleration of Hardware Testing and Validation Algorithms using Graphics Processing Units.
Degree: PhD, Electrical and Computer Engineering, 2012, Virginia Tech
URL: http://hdl.handle.net/10919/29129
With the advances of very large scale integration (VLSI) technology, the feature size has been shrinking steadily, together with an increase in the design complexity of logic circuits. As a result, the effort required to design, test, and debug digital systems has increased tremendously. Although electronic design automation (EDA) algorithms have been studied extensively to accelerate such processes, some computationally intensive applications still take long execution times. This is especially the case for testing and validation. In order to meet time-to-market constraints and to come up with a bug-free design or product, the work presented in this dissertation studies the acceleration of EDA algorithms on Graphics Processing Units (GPUs). This dissertation concentrates on a subset of EDA algorithms related to testing and validation. In particular, within the area of testing, fault simulation, diagnostic simulation, and reliability analysis are explored. We also investigate approaches to parallelize state justification on GPUs, which is one of the most difficult problems in the validation area.
Firstly, we present an efficient parallel fault simulator, FSimGP2, which exploits the high degree of parallelism supported by a state-of-the-art graphics processing unit (GPU) with the NVIDIA Compute Unified Device Architecture (CUDA). A novel three-dimensional parallel fault simulation technique is proposed to achieve extremely high computation efficiency on the GPU. The experimental results demonstrate a speedup of up to 4× compared to another GPU-based fault simulator.
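The kind of parallelism such simulators exploit can be shown on a CPU in a few lines: pack many test patterns into one machine word, so that each bitwise gate evaluation simulates all of them at once. A Python sketch of this pattern-parallel idea (a toy netlist and one injected fault; illustrative only, not FSimGP2's three-dimensional scheme):

import numpy as np

# Each uint64 lane packs 64 test patterns: one AND/OR/XOR evaluates a
# gate under all 64 patterns simultaneously.
rng = np.random.default_rng(0)
a, b, c = rng.integers(0, 2**63, size=3, dtype=np.uint64)

def simulate(a, b, c, saf_y0=False):
    y = (a & b) | ~c            # toy netlist: y = (a AND b) OR (NOT c)
    if saf_y0:                  # inject a stuck-at-0 fault on net y
        y = np.uint64(0)
    return y ^ (a | b)          # downstream XOR stage feeding the output

good = simulate(a, b, c)
faulty = simulate(a, b, c, saf_y0=True)
detecting = good ^ faulty       # bit i set => pattern i detects the fault
print(f"fault detected by {bin(int(detecting)).count('1')} of 64 patterns")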
Then, another GPU-based simulator is used to tackle an even more computation-intensive task, diagnostic fault simulation. The simulator is based on a two-stage framework which achieves high computation efficiency on the GPU. We introduce a fault-pair-based approach to alleviate the limited memory capacity on GPUs. Also, multi-fault-signature and dynamic load balancing techniques are introduced for the best usage of on-board computing resources.
With continuous feature-size scaling and the advent of innovative nano-scale devices, the reliability analysis of digital systems is becoming ever more important. However, the computational cost to accurately analyze a large digital system is very high. We propose a high-performance reliability analysis tool on GPUs. To achieve high memory bandwidth on GPUs, two algorithms for simulation scheduling and memory arrangement are proposed. Experimental results demonstrate that the parallel analysis tool is efficient, reliable, and scalable.
In the area of design validation, we investigate state justification. By employing swarm intelligence and the power of parallelism on GPUs, we are able to efficiently find a trace that helps us reach corner cases during the validation of a digital system.
In summary, the work presented in this dissertation demonstrates that several applications in the area of digital design testing and validation can be successfully rearchitected to…
Advisors/Committee Members: Hsiao, Michael S. (committeechair), Schaumont, Patrick Robert (committee member), Shukla, Sandeep K. (committee member), Yang, Yaling (committee member), Fan, Weiguo Patrick (committee member).
Subjects/Keywords: general purpose computation on graphics processing; parallel algorithm; design validation; fault diagnosis; Fault simulation
APA (6th Edition):
Li, M. (2012). Acceleration of Hardware Testing and Validation Algorithms using Graphics Processing Units. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/29129
Chicago Manual of Style (16th Edition):
Li, Min. “Acceleration of Hardware Testing and Validation Algorithms using Graphics Processing Units.” 2012. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/29129.
MLA Handbook (7th Edition):
Li, Min. “Acceleration of Hardware Testing and Validation Algorithms using Graphics Processing Units.” 2012. Web. 26 Feb 2021.
Vancouver:
Li M. Acceleration of Hardware Testing and Validation Algorithms using Graphics Processing Units. [Internet] [Doctoral dissertation]. Virginia Tech; 2012. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/29129.
Council of Science Editors:
Li M. Acceleration of Hardware Testing and Validation Algorithms using Graphics Processing Units. [Doctoral Dissertation]. Virginia Tech; 2012. Available from: http://hdl.handle.net/10919/29129

Virginia Tech
22.
La Pan, Matthew Jonathan.
Security Issues for Modern Communications Systems: Fundamental Electronic Warfare Tactics for 4G Systems and Beyond.
Degree: PhD, Electrical Engineering, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/51042
In the modern era of wireless communications, radios are becoming increasingly cognitive. As the complexity and robustness of friendly communications increase, so do the abilities of adversarial jammers. The potential uses and threats of these jammers directly pertain to fourth generation (4G) communication standards, as well as future standards employing similar physical layer technologies.
This work investigates a number of threats to the technologies utilized by 4G and future systems, as well as potential improvements to the security and robustness of these communications systems. The work highlights potential attacks at both the physical layer and the medium access control (MAC) layer, along with improvements to the technologies they target.
This work presents a series of intelligent, targeted jamming attacks against the orthogonal frequency division multiplexing (OFDM) synchronization process, to demonstrate security flaws in existing 4G technology as well as to highlight some of the potential tools of a cognitive electronic warfare attack device. A performance analysis of the OFDM synchronization process under these efficient attacks is presented, where in many cases complete denial of service is induced.
A method for cross ambiguity function (CAF) based OFDM synchronization is presented as a security and mitigation tactic for 4G devices in the context of cognitive warfare scenarios. The method is shown to maintain performance comparable to other correlation-based synchronization estimators while offering the benefit of a disguised preamble. Sync-amble randomization is also discussed as a strategy to combine with CAF-based OFDM synchronization to prevent cognitive jammers from tracking and targeting OFDM synchronization.
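The CAF idea can be sketched compactly: correlate the known preamble against the received samples over a grid of delays and trial carrier frequency offsets, then take the peak of the resulting surface. A numpy sketch with hypothetical parameters (not the dissertation's estimator):

import numpy as np

rng = np.random.default_rng(1)
N, L = 512, 64
preamble = np.exp(2j * np.pi * rng.random(L))        # unit-modulus preamble

true_delay, true_cfo = 200, 0.01                     # CFO in cycles/sample
rx = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
rx[true_delay:true_delay + L] += preamble * np.exp(2j * np.pi * true_cfo * np.arange(L))

def caf(rx, s, cfo_grid):
    # correlation magnitude at every (frequency offset, delay) pair
    out = np.empty((len(cfo_grid), len(rx) - len(s) + 1))
    for i, f in enumerate(cfo_grid):
        probe = s * np.exp(2j * np.pi * f * np.arange(len(s)))
        out[i] = np.abs(np.correlate(rx, probe, mode="valid"))
    return out

cfo_grid = np.linspace(-0.02, 0.02, 41)
surface = caf(rx, preamble, cfo_grid)
fi, di = np.unravel_index(np.argmax(surface), surface.shape)
print("estimated delay:", di, "| estimated CFO:", round(cfo_grid[fi], 4))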
Finally, this work presents a method for dynamic spectrum access (DSA) enabled radio identification based solely on radio frequency (RF) observation. This method represents the framework with which both the cognitive jammer and the anti-jam radio would perform cognitive sensing in order to utilize the intelligent physical layer attack and mitigation strategies previously discussed. The identification algorithm is shown to be theoretically effective in classifying and identifying two DSA radios with distinct operating policies.
Advisors/Committee Members: Clancy, Thomas Charles (committeechair), McGwier, Robert W. (committeechair), Reed, Jeffrey H. (committee member), Hancock, Kathleen L. (committee member), Shukla, Sandeep K. (committee member).
Subjects/Keywords: Wireless Communications
APA (6th Edition):
La Pan, M. J. (2014). Security Issues for Modern Communications Systems: Fundamental Electronic Warfare Tactics for 4G Systems and Beyond. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/51042
Chicago Manual of Style (16th Edition):
La Pan, Matthew Jonathan. “Security Issues for Modern Communications Systems: Fundamental Electronic Warfare Tactics for 4G Systems and Beyond.” 2014. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/51042.
MLA Handbook (7th Edition):
La Pan, Matthew Jonathan. “Security Issues for Modern Communications Systems: Fundamental Electronic Warfare Tactics for 4G Systems and Beyond.” 2014. Web. 26 Feb 2021.
Vancouver:
La Pan MJ. Security Issues for Modern Communications Systems: Fundamental Electronic Warfare Tactics for 4G Systems and Beyond. [Internet] [Doctoral dissertation]. Virginia Tech; 2014. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/51042.
Council of Science Editors:
La Pan MJ. Security Issues for Modern Communications Systems: Fundamental Electronic Warfare Tactics for 4G Systems and Beyond. [Doctoral Dissertation]. Virginia Tech; 2014. Available from: http://hdl.handle.net/10919/51042

Virginia Tech
23.
Wang, Ting.
Wireless Network Physical Layer Security with Smart Antenna.
Degree: PhD, Computer Engineering, 2013, Virginia Tech
URL: http://hdl.handle.net/10919/23243
Smart antenna techniques have emerged as one of the leading technologies for enhancing the quality of service in wireless networks. Because of its ability to concentrate transmit power in desired directions, the smart antenna has been widely adopted by academia and industry to achieve better coverage, improved capacity, and higher spectrum efficiency in wireless communication systems. In spite of its popularity in performance-enhancement applications, the smart antenna's capability to improve wireless network security is relatively less explored. This dissertation focuses on exploiting smart antenna technology to develop physical layer solutions to anti-eavesdropping and location security problems. We first investigate the problem of enhancing wireless communication privacy. A novel scheme named "artificial fading" is proposed, which leverages the beam switching capability of smart antennas to prevent eavesdropping attacks. We introduce an optimization strategy to design a pair of switched beam patterns that both have high directional gain toward the intended receiver, while in all other directions the overlap between the two patterns is minimized. The transmitter switches between the two patterns at a high frequency, so the signal in unintended directions experiences severe fading and the eavesdropper cannot decode it. We use simulation experiments to show that artificial fading outperforms single-pattern beamforming in reducing the unnecessary coverage area of the wireless transmitter. We then study the impact of beamforming techniques on wireless localization systems from the perspectives of both location privacy protection and location spoofing attacks. For the location privacy preservation scheme, we assume that the adversary uses received signal strength (RSS) based localization systems to localize network users in a Wireless LAN (WLAN). The purpose of the scheme is to make the adversary unable to uniquely localize the user when possible and, otherwise, to maximize the error of the adversary's localization results. To this end, we design a two-step scheme to optimize the beamforming pattern of the wireless user's smart antenna. First, the user moves around to estimate the locations of surrounding access points (APs). Then, based on the locations of the APs, the pattern synthesis is optimized to minimize the number of APs in the coverage area and degrade the localization precision. Simulation results show that our scheme can significantly lower the chance of being localized by adversaries and degrade the location estimation precision to as low as the coverage range of the AP that the wireless user is connected to. As personal privacy preservation and security assurance at the system level always conflict to some extent, the capability of the smart antenna to intentionally bias the RSS measurements of the localization system also potentially enables location spoofing attacks. From this aspect, we present a theoretical analysis of the feasibility of…
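The artificial-fading objective can be illustrated numerically. A hypothetical sketch for an 8-element half-wavelength uniform linear array (the dissertation obtains its pattern pair by optimization; here the second pattern is simply a tapered variant of the first): both weight vectors deliver the same gain toward the receiver, while the gain elsewhere differs, so rapid switching leaves the intended link steady but makes off-axis reception fluctuate.

import numpy as np

M = 8
m = np.arange(M)
theta = np.radians(np.linspace(-90, 90, 721))
target = np.radians(30)                       # direction of intended receiver

steer = np.exp(1j * np.pi * m * np.sin(target))
w1 = steer                                    # pattern 1: uniform taper
w2 = steer * np.hamming(M)                    # pattern 2: different sidelobes
w2 *= M / np.hamming(M).sum()                 # equalize gain at the receiver

A = np.exp(1j * np.pi * np.outer(np.sin(theta), m))   # response at all angles
g1, g2 = np.abs(A @ np.conj(w1)), np.abs(A @ np.conj(w2))

swing_db = np.abs(20 * np.log10(g1 + 1e-9) - 20 * np.log10(g2 + 1e-9))
at_rx = np.argmin(np.abs(theta - target))
print(f"gain swing at receiver: {swing_db[at_rx]:.2f} dB; "
      f"max swing off-axis: {swing_db.max():.2f} dB")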
Advisors/Committee Members: Yang, Yaling (committeechair), Yao, Danfeng (committee member), Shukla, Sandeep K. (committee member), Hou, Yiwei Thomas (committee member), Park, Jung-Min (committee member).
Subjects/Keywords: Wireless Network Security; Localization; Location privacy; Anti-eavesdropping; Smart Antenna; Beamforming; Location Spoofing
APA (6th Edition):
Wang, T. (2013). Wireless Network Physical Layer Security with Smart Antenna. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/23243
Chicago Manual of Style (16th Edition):
Wang, Ting. “Wireless Network Physical Layer Security with Smart Antenna.” 2013. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/23243.
MLA Handbook (7th Edition):
Wang, Ting. “Wireless Network Physical Layer Security with Smart Antenna.” 2013. Web. 26 Feb 2021.
Vancouver:
Wang T. Wireless Network Physical Layer Security with Smart Antenna. [Internet] [Doctoral dissertation]. Virginia Tech; 2013. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/23243.
Council of Science Editors:
Wang T. Wireless Network Physical Layer Security with Smart Antenna. [Doctoral Dissertation]. Virginia Tech; 2013. Available from: http://hdl.handle.net/10919/23243
24.
Gao, Bo.
Coexistence of Wireless Networks for Shared Spectrum Access.
Degree: PhD, Computer Engineering, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/50525
The radio frequency spectrum is not being efficiently utilized, partly due to the current policy of allocating frequency bands to specific services and users. In opportunistic spectrum access (OSA), the "white spaces" that are not occupied by primary users (a.k.a. incumbent users) can be opportunistically utilized by secondary users. To achieve this, we need to solve two problems: (i) primary-secondary incumbent protection, i.e., prevention of harmful interference from secondary users to primary users; and (ii) secondary-secondary network coexistence, i.e., mitigation of mutual interference among secondary users. The first problem has been addressed by spectrum sensing techniques in cognitive radio (CR) networks and by geolocation database services in database-driven spectrum sharing. The second problem is the main focus of this dissertation. To obtain a clear picture of coexistence issues, we propose a taxonomy of heterogeneous coexistence mechanisms for shared spectrum access. Based on the taxonomy, we choose to focus on four typical coexistence scenarios in this dissertation.
Firstly, we study sensing-based OSA, when secondary users are capable of employing the channel aggregation technique. However, channel aggregation is not always beneficial due to dynamic spectrum availability and limited radio capability. We propose a channel usage model to analyze the impact of both primary and secondary user behaviors on the efficiency of channel aggregation. Our simulation results show that user demands in both the frequency and time domains should be carefully chosen to minimize expected cumulative delay.
Secondly, we study the coexistence of homogeneous CR networks, termed self-coexistence, when co-channel networks do not rely on inter-network coordination. We propose an uplink soft frequency reuse technique to enable globally power-efficient and locally fair spectrum sharing. We frame the self-coexistence problem as a non-cooperative game and design a local heuristic algorithm that achieves a Nash equilibrium in a distributed manner. Our simulation results show that the proposed technique is mostly near-optimal and improves self-coexistence in spectrum utilization, power consumption, and intra-cell fairness.
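Best-response dynamics for such games can be sketched in a few lines. A toy channel-selection game with hypothetical symmetric interference gains (much simpler than the soft-frequency-reuse game above, but with the same fixed-point flavor): each network repeatedly moves to the channel where it sees the least interference, and the loop halts at a pure Nash equilibrium.

import numpy as np

rng = np.random.default_rng(2)
N, C = 6, 3                                   # networks, channels
g = rng.random((N, N))
gain = (g + g.T) / 2                          # symmetric cross-interference
np.fill_diagonal(gain, 0.0)
ch = rng.integers(0, C, size=N)               # initial channel choices

def interference(i, c):
    # interference network i would experience on channel c
    return sum(gain[i, j] for j in range(N) if j != i and ch[j] == c)

changed = True
while changed:                                # best-response dynamics
    changed = False
    for i in range(N):
        best = min(range(C), key=lambda c: interference(i, c))
        if interference(i, best) < interference(i, ch[i]):
            ch[i] = best                      # strict improvement move
            changed = True

print("equilibrium channel assignment:", ch)

With symmetric gains this is a potential game, so every strict improvement decreases a global potential and the loop is guaranteed to terminate.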
Thirdly, we study the coexistence of heterogeneous CR networks, when co-channel networks use different air interface standards. We propose a credit-token-based spectrum etiquette framework that enables spectrum sharing via inter-network coordination. Specifically, we propose a game-auction coexistence framework, and prove that the framework is stable. Our simulation results show that the proposed framework always converges to a near-optimal distributed solution and improves coexistence fairness and spectrum utilization.
Fourthly, we study database-driven OSA, when secondary users are mobile. The use of geolocation databases is inadequate in supporting location-aided spectrum sharing if the users are mobile. We propose a probabilistic coexistence framework that supports mobile users by locally adapting their…
Advisors/Committee Members: Park, Jung-Min Jerry (committeechair), Yang, Yaling (committeechair), Yao, Danfeng (committee member), Hou, Yiwei Thomas (committee member), Shukla, Sandeep K. (committee member).
Subjects/Keywords: Opportunistic Spectrum Access; Cognitive Radio Networks; White Space Networks; Shared Spectrum Access; Spectrum Sharing; Network Coexistence
APA (6th Edition):
Gao, B. (2014). Coexistence of Wireless Networks for Shared Spectrum Access. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/50525
Chicago Manual of Style (16th Edition):
Gao, Bo. “Coexistence of Wireless Networks for Shared Spectrum Access.” 2014. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/50525.
MLA Handbook (7th Edition):
Gao, Bo. “Coexistence of Wireless Networks for Shared Spectrum Access.” 2014. Web. 26 Feb 2021.
Vancouver:
Gao B. Coexistence of Wireless Networks for Shared Spectrum Access. [Internet] [Doctoral dissertation]. Virginia Tech; 2014. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/50525.
Council of Science Editors:
Gao B. Coexistence of Wireless Networks for Shared Spectrum Access. [Doctoral Dissertation]. Virginia Tech; 2014. Available from: http://hdl.handle.net/10919/50525

Virginia Tech
25.
Lerner, Lee Wilmoth.
Trustworthy Embedded Computing for Cyber-Physical Control.
Degree: PhD, Computer Engineering, 2015, Virginia Tech
URL: http://hdl.handle.net/10919/51545
A cyber-physical controller (CPC) uses computing to control a physical process. Example CPCs can be found in self-driving automobiles, unmanned aerial vehicles, and other autonomous systems. They are also used in the large-scale industrial control systems (ICSs) of manufacturing and utility infrastructure. CPC operations rely on embedded systems having real-time, high-assurance interactions with physical processes. However, recent attacks like Stuxnet have demonstrated that CPC malware is not restricted to networks and general-purpose computers; rather, embedded components are targeted as well. General-purpose computing and network approaches to security are failing to protect embedded controllers, whose compromise can have the direct effect of process disturbance or destruction. Moreover, as embedded systems increasingly grow in capability and find application in CPCs, embedded leaf-node security is gaining priority.
This work develops a root-of-trust design architecture which provides process resilience to cyber attacks on, or from, embedded controllers: the Trustworthy Autonomic Interface Guardian Architecture (TAIGA). We define five trust requirements for building a fine-grained trusted computing component. TAIGA satisfies all five requirements and addresses all classes of CPC attacks using an approach distinguished by adding resilience to the embedded controller, rather than seeking to prevent attacks from ever reaching the controller. TAIGA provides an on-chip, digital, security version of classic mechanical interlocks. This last line of defense monitors all of the communications of a controller using configurable or external hardware that is inaccessible to the controller processor. The interface controller is synthesized from C code, formally analyzed, and permits run-time-checked, authenticated updates to certain system parameters but not to code. TAIGA overrides any controller actions that are inconsistent with system specifications, including prediction and preemption of latent malware's attempts to disrupt system stability and safety.
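The interlock idea can be shown in miniature. A hypothetical software sketch (the real TAIGA monitor is synthesized, formally analyzed hardware outside the controller's reach): every command from the untrusted controller is vetted against the plant specification, and out-of-specification actuation is clamped and rate-limited.

# Hypothetical plant limits standing in for the system specification.
SPEC = {"valve_min": 0.0, "valve_max": 1.0, "max_step": 0.2}

class Guardian:
    """Vets every actuator command issued by an untrusted controller."""
    def __init__(self):
        self.last = 0.0                       # last actuation actually applied

    def vet(self, cmd: float) -> float:
        # clamp to the legal actuation range
        safe = min(max(cmd, SPEC["valve_min"]), SPEC["valve_max"])
        # rate-limit to preempt abrupt, destabilizing actuation
        step = max(-SPEC["max_step"], min(SPEC["max_step"], safe - self.last))
        self.last += step
        return self.last

g = Guardian()
for cmd in [0.1, 0.3, 5.0, -9.0]:             # last two mimic malicious commands
    print(f"controller asked {cmd:+.1f} -> actuator gets {g.vet(cmd):+.2f}")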
This material is based upon work supported by the National Science Foundation under Grant Number CNS-1222656. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We are grateful for donations from Xilinx, Inc. and support from the Georgia Tech Research Institute.
Advisors/Committee Members: Patterson, Cameron D. (committeechair), Shukla, Sandeep K. (committee member), Yao, Danfeng (committee member), Schaumont, Patrick Robert (committee member), Park, Jung-Min (committee member).
Subjects/Keywords: Trustworthy Computing; Secure Computing; Autonomic Computing; Cybersecurity; Embedded Systems; Cyber-Physical Systems; Process Control Systems
APA (6th Edition):
Lerner, L. W. (2015). Trustworthy Embedded Computing for Cyber-Physical Control. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/51545
Chicago Manual of Style (16th Edition):
Lerner, Lee Wilmoth. “Trustworthy Embedded Computing for Cyber-Physical Control.” 2015. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/51545.
MLA Handbook (7th Edition):
Lerner, Lee Wilmoth. “Trustworthy Embedded Computing for Cyber-Physical Control.” 2015. Web. 26 Feb 2021.
Vancouver:
Lerner LW. Trustworthy Embedded Computing for Cyber-Physical Control. [Internet] [Doctoral dissertation]. Virginia Tech; 2015. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/51545.
Council of Science Editors:
Lerner LW. Trustworthy Embedded Computing for Cyber-Physical Control. [Doctoral Dissertation]. Virginia Tech; 2015. Available from: http://hdl.handle.net/10919/51545

Virginia Tech
26.
Lin, Hua.
Communication Infrastructure for the Smart Grid: A Co-Simulation Based Study on Techniques to Improve the Power Transmission System Functions with Efficient Data Networks.
Degree: PhD, Electrical and Computer Engineering, 2012, Virginia Tech
URL: http://hdl.handle.net/10919/29248
The vision of the smart grid is predicated upon pervasive use of modern digital communication techniques in today's power system. As wide area measurement and control techniques are developed and deployed for a more resilient power system, the role of communication networks is becoming prominent. Advanced communication infrastructure provides much wider system observability and enables globally optimal control schemes. Wide area measurement and monitoring with Phasor Measurement Units (PMUs) or Intelligent Electronic Devices (IEDs) is a growing trend in this context. However, the large amount of data collected by PMUs or IEDs needs to be transferred over the data network to control centers, where real-time state estimation, protection, and control decisions are made. The volume and frequency of such data transfers, and their real-time delivery requirements, mandate that sufficient bandwidth and proper delay characteristics be ensured for correct operation. Power system dynamics are influenced by the underlying communication infrastructure. Therefore, the extensive integration of power system and communication infrastructure mandates that the two systems be studied as a single distributed cyber-physical system.
This dissertation proposes a global event-driven co-simulation framework, termed GECO, for the interconnected power system and communication network. GECO can be used as a design pattern for hybrid system simulation with continuous/discrete sub-components. An implementation of GECO is achieved by integrating two software packages, PSLF and NS2, into the framework. In addition, this dissertation proposes and studies a set of power system applications which can only be properly evaluated on a co-simulation framework like GECO, namely communication-based distance relay protection, all-PMU state estimation, and PMU-based out-of-step protection. All of them take advantage of the interplay between the power grid and the communication infrastructure. The GECO experiments described in this dissertation not only show the efficacy of the GECO framework, but also provide experience on how to use GECO in smart grid planning activities.
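The global event-driven synchronization at the heart of such a framework can be illustrated with a toy loop (hypothetical simulator stubs, not the PSLF/NS2 integration): a single queue orders power-side and network-side events by timestamp, and each handler may schedule events for the other domain, keeping the two simulators causally consistent.

import heapq

events = []                                   # (time, domain, payload)
heapq.heappush(events, (0.00, "power", "breaker trip at bus 7"))

def handle_power(t, payload):
    print(f"[{t:.2f}s power  ] {payload}")
    if "trip" in payload:                     # power event crosses into the network domain
        heapq.heappush(events, (t + 0.02, "network", "PMU frame to control center"))

def handle_network(t, payload):
    print(f"[{t:.2f}s network] {payload}")
    if "PMU" in payload:                      # control decision returns to the power domain
        heapq.heappush(events, (t + 0.05, "power", "remote backup relay command"))

handlers = {"power": handle_power, "network": handle_network}
while events:                                 # advance whichever domain owns the next event
    t, domain, payload = heapq.heappop(events)
    handlers[domain](t, payload)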
Advisors/Committee Members: Shukla, Sandeep K. (committeechair), Wernz, Christian (committee member), Yang, Yaling (committee member), Mili, Lamine M. (committee member), Abbott, A. Lynn (committee member).
Subjects/Keywords: Co-Simulation; Wide Area Measurement System; Smart Grid; Remote Backup Relay; All-PMU State Estimation; Out-of-Step Protection
APA (6th Edition):
Lin, H. (2012). Communication Infrastructure for the Smart Grid: A Co-Simulation Based Study on Techniques to Improve the Power Transmission System Functions with Efficient Data Networks. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/29248
Chicago Manual of Style (16th Edition):
Lin, Hua. “Communication Infrastructure for the Smart Grid: A Co-Simulation Based Study on Techniques to Improve the Power Transmission System Functions with Efficient Data Networks.” 2012. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/29248.
MLA Handbook (7th Edition):
Lin, Hua. “Communication Infrastructure for the Smart Grid: A Co-Simulation Based Study on Techniques to Improve the Power Transmission System Functions with Efficient Data Networks.” 2012. Web. 26 Feb 2021.
Vancouver:
Lin H. Communication Infrastructure for the Smart Grid: A Co-Simulation Based Study on Techniques to Improve the Power Transmission System Functions with Efficient Data Networks. [Internet] [Doctoral dissertation]. Virginia Tech; 2012. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/29248.
Council of Science Editors:
Lin H. Communication Infrastructure for the Smart Grid: A Co-Simulation Based Study on Techniques to Improve the Power Transmission System Functions with Efficient Data Networks. [Doctoral Dissertation]. Virginia Tech; 2012. Available from: http://hdl.handle.net/10919/29248

Virginia Tech
27.
Maiti, Abhranil.
A Systematic Approach to Design an Efficient Physical Unclonable Function.
Degree: PhD, Electrical and Computer Engineering, 2012, Virginia Tech
URL: http://hdl.handle.net/10919/51257
A Physical Unclonable Function (PUF) has shown a lot of promise in solving many security issues, due to its ability to generate a random yet chip-unique secret in the form of an identifier or a key while resisting cloning attempts as well as physical tampering. It is a hardware-based challenge-response function which maps challenges to responses by exploiting the complex statistical variation in the logic and interconnect inside integrated circuits (ICs). An efficient PUF should generate a key that varies randomly from one chip to another. At the same time, it should reliably reproduce the key from a chip every time the key is requested from that chip. Moreover, a PUF should be robust, to thwart any attack that aims to reveal its key. Designing an efficient PUF having all these qualities at a low cost is challenging. Furthermore, the efficiency of a PUF needs to be validated by characterizing it over a group of chips, because a PUF circuit is supposed to be instantiated in several chips, and whether it can produce a chip-unique identifier/key cannot be validated using a single chip. The main goal of this research is to propose a systematic approach to building a random, reliable, and robust PUF incurring minimal cost.
With this objective, we first formulate a novel PUF system model that uncouples PUF measurement from PUF identifier formation. The proposed model divides PUF operation into three separate but related components. We show that the three PUF quality factors, randomness, reliability, and robustness, can be improved at each component of the system model, resulting in an overall improvement of the PUF. We proposed three PUF enhancement techniques based on the system model in this research, and they showed significant improvements in a PUF.
Second, we present a large-scale PUF characterization method to validate the efficiency of a PUF as a secure primitive. The compact and portable method was used to measure a sizable set of around 200 chips. We also performed experiments to test a PUF against variations in operating conditions (temperature, supply voltage) and circuit aging.
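The standard characterization metrics behind such a study are easy to state in code. A sketch over simulated responses (the study itself measured physical chips): uniqueness is the mean pairwise inter-chip Hamming distance, ideally 50%, and reliability is one minus the mean intra-chip Hamming distance across repeated noisy measurements.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
chips, bits, reps = 20, 128, 11
ref = rng.integers(0, 2, size=(chips, bits))         # nominal responses
noise = rng.random((chips, reps, bits)) < 0.03       # 3% bit-flip probability
meas = ref[:, None, :] ^ noise                       # repeated measurements

uniq = np.mean([np.mean(ref[i] ^ ref[j])
                for i, j in combinations(range(chips), 2)])
rel = 1.0 - np.mean(meas ^ ref[:, None, :])

print(f"uniqueness: {uniq:.1%} (ideal 50%), reliability: {rel:.1%}")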
Third, we propose a method that can evaluate and compare the performance of different PUFs irrespective of their underlying working principles. This method can help a designer select the PUF that is most suitable for a particular application. Finally, a novel PUF that exploits the variability in the pipeline of a microprocessor is presented. This PUF has a very low area cost and can be easily integrated, via software programs, into any application having a microprocessor.
Advisors/Committee Members: Schaumont, Patrick Robert (committeechair), Kim, Inyoung (committee member), Nazhandali, Leyla (committee member), Shukla, Sandeep K. (committee member), Tront, Joseph G. (committee member).
Subjects/Keywords: Process variation; Physical Unclonable Function; Security
APA (6th Edition):
Maiti, A. (2012). A Systematic Approach to Design an Efficient Physical Unclonable Function. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/51257
Chicago Manual of Style (16th Edition):
Maiti, Abhranil. “A Systematic Approach to Design an Efficient Physical Unclonable Function.” 2012. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/51257.
MLA Handbook (7th Edition):
Maiti, Abhranil. “A Systematic Approach to Design an Efficient Physical Unclonable Function.” 2012. Web. 26 Feb 2021.
Vancouver:
Maiti A. A Systematic Approach to Design an Efficient Physical Unclonable Function. [Internet] [Doctoral dissertation]. Virginia Tech; 2012. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/51257.
Council of Science Editors:
Maiti A. A Systematic Approach to Design an Efficient Physical Unclonable Function. [Doctoral Dissertation]. Virginia Tech; 2012. Available from: http://hdl.handle.net/10919/51257
28.
Ganta, Dinesh.
An Effort toward Building more Secure and Efficient Physical Unclonable Functions.
Degree: PhD, Computer Engineering, 2015, Virginia Tech
URL: http://hdl.handle.net/10919/51217
Over the last decade, there has been tremendous growth in the number of electronic devices and applications. One of the most important aspects of dealing with such a proliferation of ICs is their security. Establishing the identity (ID) of a device is the cornerstone of any secure application. Typically, the IDs of devices are stored in non-volatile memories (NVMs) or created by burning fuses on ICs. However, with such traditional techniques, IDs are vulnerable to attacks. Further, maintaining such secrets in NVMs is expensive.
Physical Unclonable Functions (PUFs) provide an alternative method for creating chip IDs. They exploit the uncontrollable variations that exist in IC manufacturing to generate identifiers. However, since PUFs exploit the small mismatch across identically designed circuits, their responses are prone to error in the presence of unwanted variations in operating temperature, supply voltage, and other noise sources. The overarching goal of this work is to develop silicon PUFs that are highly efficient and stable under such noise. In addition, to make PUFs more attractive for low-cost and tiny embedded systems, our goal is to develop PUFs with minimal area and power consumption for a given ID length and security requirement.
Techniques to develop such PUFs span different abstraction levels ranging from technology-independent application-level techniques to technology-dependent device-level ones. In this dissertation, we present different technology-independent and technology-dependent techniques and evaluate which techniques are good candidates for improving different qualities of PUFs.
Among the technology-independent techniques, we propose two modifications to a conventional PUF architecture, which are detailed in this thesis. Both modifications result in a PUF that is more efficient in terms of area and power. Compared to the traditional architecture, for a given amount of silicon real estate, the proposed architecture provides an over two orders of magnitude larger challenge/response (C/R) space and has higher resistance to modeling attacks.
Under the technology-dependent methods, we investigate multiple techniques that improve the stability and efficiency of PUF designs. In one approach, we propose a novel PUF design with an architecture similar to that of a traditional design, where we replace large and power-hungry digital components with more efficient analog components. In another technique, we exploit the differences between pMOS and nMOS transistors in their variation of threshold voltage (Vth) and in the temperature coefficients of Vth to significantly improve the stability of bi-stable PUFs. We also use circuit-level simulations to evaluate the stability of silicon PUFs under aging degradation.
We believe that our technology-independent techniques are good candidates for improving the overall efficiency of PUFs in terms of both operation and implementation costs, suitable for PUFs with tight constraints on design and test cost. However, with regard to improving the stability of PUFs, it is cost-effective to use our…
Advisors/Committee Members: Nazhandali, Leyla (committeechair), Tehranipoor, Mohammad (committee member), Schaumont, Patrick Robert (committee member), Shukla, Sandeep K. (committee member), Kim, Inyoung (committee member).
Subjects/Keywords: Physical Unclonable Functions; Security; Fingerprinting; Integrated Circuits; Identifiers
APA (6th Edition):
Ganta, D. (2015). An Effort toward Building more Secure and Efficient Physical Unclonable Functions. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/51217
Chicago Manual of Style (16th Edition):
Ganta, Dinesh. “An Effort toward Building more Secure and Efficient Physical Unclonable Functions.” 2015. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/51217.
MLA Handbook (7th Edition):
Ganta, Dinesh. “An Effort toward Building more Secure and Efficient Physical Unclonable Functions.” 2015. Web. 26 Feb 2021.
Vancouver:
Ganta D. An Effort toward Building more Secure and Efficient Physical Unclonable Functions. [Internet] [Doctoral dissertation]. Virginia Tech; 2015. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/51217.
Council of Science Editors:
Ganta D. An Effort toward Building more Secure and Efficient Physical Unclonable Functions. [Doctoral Dissertation]. Virginia Tech; 2015. Available from: http://hdl.handle.net/10919/51217
29.
Anderson, Matthew Eric.
APECS: A Polychrony based End-to-End Embedded System Design and Code Synthesis.
Degree: PhD, Computer Engineering, 2015, Virginia Tech
URL: http://hdl.handle.net/10919/52369
The development of high-integrity embedded systems remains an arduous and error-prone task, despite the efforts of researchers in inventing tools and techniques for design automation. Much of the problem arises from the fact that the semantics of the modeling languages of the various tools are often distinct, and the semantic gaps are often filled manually through the engineer's understanding of one model or an abstraction. This provides an opportunity for bugs to creep in, beyond the standard software engineering errors germane to such complex system engineering. Since embedded systems applications such as avionics, automotive, or industrial automation are safety-critical, it is very important to invent tools and methodologies for safe and reliable system design. Most tools and techniques deal with either the design of embedded platforms (hardware, networking, firmware, etc.) or the software stack, separately. The problem of the semantic gap between these two, as well as between the models of computation used to capture semantics, must be solved in order to design safer embedded systems.
In this dissertation we propose a methodology for the end-to-end modeling and analysis of safety-critical embedded systems. Our approach consists of formal platform modeling and analysis, formal application modeling, and 'correct-by-construction' code synthesis, with the aim of bridging the semantic gaps between the various abstractions and models required for end-to-end system design. While the platform modeling language AADL has formal semantics and analysis tools for real-time and performance verification, application behavior modeling in AADL is weak and relegated to an annex. In our work, we create the APECS (AADL and Polychrony based Embedded Computing Synthesis) methodology to allow an embedded system design to be specified all the way from the platform architecture and platform components, through real-time behavior and non-functional properties, to the application software model. Our main contribution is to integrate a polychronous application software modeling language and synthesis algorithms, in order to synthesize the embedded software running on the target platform with the required constraints being met. We believe that a polychronous approach is particularly well suited to a multiprocessor/multi-controller distributed platform where different components often operate at independent rates and concurrently. Further, the use of a formal polychronous language allows formal validation of the software prior to code generation. We present a prototype framework, APECS, that implements this approach. Our prototype utilizes an extended version of Ocarina to provide code generation for the AADL model. Our polychronous modeling language is MRICDF. Our prototype extends Ocarina to support software specification in MRICDF and to generate multi-threaded software. Additionally, we implement an automated translation from Simulink to…
Advisors/Committee Members: Shukla, Sandeep K. (committeechair), Haghighat, Alireza (committee member), Wang, Chao (committee member), Deng, Yi (committee member), Mili, Lamine M. (committee member).
Subjects/Keywords: AADL; CPS; Model-based code synthesis; correct-by-construction code synthesis; Polychrony; code generators; OSATE; Ocarina
APA (6th Edition):
Anderson, M. E. (2015). APECS: A Polychrony based End-to-End Embedded System Design and Code Synthesis. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/52369
Chicago Manual of Style (16th Edition):
Anderson, Matthew Eric. “APECS: A Polychrony based End-to-End Embedded System Design and Code Synthesis.” 2015. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021.
http://hdl.handle.net/10919/52369.
MLA Handbook (7th Edition):
Anderson, Matthew Eric. “APECS: A Polychrony based End-to-End Embedded System Design and Code Synthesis.” 2015. Web. 26 Feb 2021.
Vancouver:
Anderson ME. APECS: A Polychrony based End-to-End Embedded System Design and Code Synthesis. [Internet] [Doctoral dissertation]. Virginia Tech; 2015. [cited 2021 Feb 26].
Available from: http://hdl.handle.net/10919/52369.
Council of Science Editors:
Anderson ME. APECS: A Polychrony based End-to-End Embedded System Design and Code Synthesis. [Doctoral Dissertation]. Virginia Tech; 2015. Available from: http://hdl.handle.net/10919/52369

Virginia Tech
30.
Short, Nathaniel Jackson.
Robust Feature Extraction and Temporal Analysis for Partial Fingerprint Identification.
Degree: PhD, Electrical and Computer Engineering, 2012, Virginia Tech
URL: http://hdl.handle.net/10919/29033
► Identification of an individual from discriminating features of the friction ridge surface is one of the oldest and most commonly used biometric techniques. Methods for…
(more)
▼ Identification of an individual from discriminating features of the friction ridge surface is one of the oldest and most commonly used biometric techniques. Methods for identification span from tedious, although highly accurate, manual examination to much faster Automated Fingerprint Identification Systems (AFIS). While automatic fingerprint recognition has grown in popularity due to the speed and accuracy of matching minutiae features in good-quality plain-to-rolled prints, its performance is far less impressive when matching partial fingerprints. For some applications, including forensic analysis, where partial prints come in the form of latent prints, it is not always possible to obtain high-quality image samples. Latent prints, which are lifted from a surface, are typically of low quality and cover a small fingerprint surface area. As a result, the overlapping region in which to find corresponding features in the genuine matching ten-print is reduced, which in turn reduces identification performance. Image quality can also vary substantially during capture in applications with a high throughput of subjects who have limited training, such as border control; rushed capture can yield a sample that is acceptable overall but whose local image regions are of low quality.
We propose an improvement to the reliability of features detected in exemplar prints, reducing the likelihood that the region overlapping a genuine partial print is itself unreliable. To this end, a novel approach is proposed for detecting minutiae in low-quality image regions; it has demonstrated an increase in match performance on a set of fingerprints from a well-known database. While the method improves match performance for all of the fingerprint images in the database, a more significant improvement is observed for the subset of low-quality images.
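As a purely hypothetical illustration of the idea in the preceding paragraph — not the dissertation's detector — the sketch below gates candidate minutiae on a local quality score. The variance-based quality proxy, block size, and threshold are all invented for the example.

```python
# Hypothetical illustration: drop candidate minutiae that fall in
# image regions whose local quality score is too low to trust.
import numpy as np

def local_quality(img: np.ndarray, block: int = 16) -> np.ndarray:
    # Crude quality proxy: per-block grayscale variance. (Real systems
    # use measures such as ridge-orientation coherence instead.)
    h, w = img.shape
    q = np.zeros((h // block, w // block))
    for i in range(q.shape[0]):
        for j in range(q.shape[1]):
            q[i, j] = img[i*block:(i+1)*block, j*block:(j+1)*block].var()
    return q / (q.max() + 1e-9)   # normalize to [0, 1]

def filter_minutiae(minutiae, qmap, block: int = 16, thresh: float = 0.3):
    # Keep only minutiae (x, y) whose enclosing block scores >= thresh.
    return [(x, y) for (x, y) in minutiae
            if qmap[y // block, x // block] >= thresh]
```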
In addition, a novel method is proposed for fingerprint analysis using a sequence of fingerprint images. The approach extracts and tracks minutiae across the sequence for temporal analysis of a single impression, reducing the variation in image quality during capture. Instead of choosing a single acceptable image from the sequence based on a global quality measure, we examine how quality changes at the local level and stitch together blocks from multiple images according to the best local quality measures.
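The stitching step described above lends itself to a short sketch. Again, this is an assumption-laden illustration (block size, quality maps) rather than the author's implementation: for every block position, it keeps the block from whichever frame in the sequence scores highest locally.

```python
# Sketch of quality-driven block stitching across a frame sequence.
import numpy as np

def stitch(frames, qmaps, block: int = 16) -> np.ndarray:
    # frames: list of H x W arrays from one impression sequence;
    # qmaps: per-frame block quality maps (e.g., from a measure like
    # local_quality in the previous sketch).
    h, w = frames[0].shape
    out = np.zeros_like(frames[0])
    for i in range(h // block):
        for j in range(w // block):
            # pick the frame with the best quality at this block position
            best = max(range(len(frames)), key=lambda k: qmaps[k][i, j])
            out[i*block:(i+1)*block, j*block:(j+1)*block] = \
                frames[best][i*block:(i+1)*block, j*block:(j+1)*block]
    return out
```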
Advisors/Committee Members: Abbott, A. Lynn (committeechair), Xuan, Jianhua Jason (committee member), Shukla, Sandeep K. (committee member), Hsiao, Michael S. (committee member), Fox, Edward A. (committee member).
Subjects/Keywords: Bayesian Estimation; Fingerprint; Biometrics; Extended Features; Temporal Analysis
APA (6th Edition):
Short, N. J. (2012). Robust Feature Extraction and Temporal Analysis for Partial Fingerprint Identification. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/29033
Chicago Manual of Style (16th Edition):
Short, Nathaniel Jackson. “Robust Feature Extraction and Temporal Analysis for Partial Fingerprint Identification.” 2012. Doctoral Dissertation, Virginia Tech. Accessed February 26, 2021. http://hdl.handle.net/10919/29033.
MLA Handbook (7th Edition):
Short, Nathaniel Jackson. “Robust Feature Extraction and Temporal Analysis for Partial Fingerprint Identification.” 2012. Web. 26 Feb 2021.
Vancouver:
Short NJ. Robust Feature Extraction and Temporal Analysis for Partial Fingerprint Identification. [Internet] [Doctoral dissertation]. Virginia Tech; 2012. [cited 2021 Feb 26]. Available from: http://hdl.handle.net/10919/29033.
Council of Science Editors:
Short NJ. Robust Feature Extraction and Temporal Analysis for Partial Fingerprint Identification. [Doctoral Dissertation]. Virginia Tech; 2012. Available from: http://hdl.handle.net/10919/29033
◁ [1] [2] [3] ▶