You searched for +publisher:"Texas A&M University" +contributor:("Bettati, Riccardo")
Showing records 1 – 30 of 113 total matches.

Texas A&M University
1.
Lin, Victor Hamilton.
PULS on WARP Platform - Detailed Investigation of Real-Time Scheduling Performance.
Degree: MS, Computer Engineering, 2018, Texas A&M University
URL: http://hdl.handle.net/1969.1/174375
Software-defined radio (SDR) platforms are popular tools for implementing custom wireless network algorithms and architecture designs nowadays. One of the most popular SDR platforms in use is the National Instruments USRP. In addition to powerful hardware supporting various physical-layer (PHY) protocols, its software is no less significant for the flexibility to implement custom MAC-layer algorithms.
When presenting Wi-Fi experimental results based on this platform, we are often asked, "What about WARP?" WARP is a wireless development platform from Mango Communications that integrates a high-performance field-programmable gate array (FPGA) from Xilinx, two flexible RF interfaces, and multiple peripherals to facilitate rapid prototyping of custom wireless designs. In the past we have focused only on the USRP for prototyping, but WARP could potentially outperform the USRP under certain requirements.
PULS [1] presented a new scheduling experiment executed on the USRP with realistic packet-arrival characteristics. The goal of this thesis is to perform a comprehensive comparison between the WARP and USRP platforms based on the PULS architecture, and to investigate the advantages and disadvantages of WARP relative to the USRP under various requirements. The experimental results show that PULS can be successfully implemented on the WARP platform, and that the throughput of PULS on WARP is 146% of that on the USRP.
Advisors/Committee Members: Hou, I-Hong (advisor), Shakkottai, Srinivas (advisor), Bettati, Riccardo (committee member).
Subjects/Keywords: MAC Scheduling; Ultra-low latency; Software Defined Radio; WARP

Texas A&M University
2.
Mohanty, Saswat.
Using Secure Real-time Padding Protocol to Secure Voice-over-IP from Traffic Analysis Attacks.
Degree: MS, Computer Science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2011-05-9116
Voice over IP (VoIP) systems and transmission technologies have now become the norm for many communications applications. However, whether they are used for personal communication or priority business conferences and talks, the privacy and confidentiality of the communication are of the utmost priority. The present industry standard is to encrypt VoIP calls using the Secure Real-time Transport Protocol (SRTP), aided by ZRTP, but this methodology remains vulnerable to traffic analysis attacks, some of which utilize the length of the encrypted packets to infer the language and spoken phrases of the conversation.
Secure Real-time Padding Protocol (SRPP) is a new RTP profile which pads all VoIP sessions in a unique way to thwart traffic analysis attacks on encrypted calls. It pads every RTP or SRTP packet to a predefined packet size, adds dummy packets at the end of every burst in a controllable way, adds dummy bursts to hide silence spurts, and hides information about the packet inter-arrival timings. This thesis discusses a few practical approaches and a theoretical optimization approach to packet size padding. SRPP has been implemented in the form of a library, libSRPP, for VoIP application developers and as an application, SQRKal, for regular users. SQRKal also serves as an extensive platform for implementation and verification of new packet padding techniques.
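The fixed-size padding idea at the core of SRPP can be sketched in a few lines. This is an illustrative toy, not the actual libSRPP API; the function names and the bucket size are invented for the example.

```python
# Toy sketch of fixed-size packet padding (the core SRPP idea): every
# payload is padded to the same bucket size, so an eavesdropper sees
# identical packet lengths. Names and sizes are illustrative assumptions.
BUCKET = 160  # assumed fixed payload size in bytes

def pad_packet(payload: bytes, bucket: int = BUCKET) -> bytes:
    """Pad payload to a fixed bucket size; a 2-byte length header lets
    the receiver recover the original payload."""
    if len(payload) > bucket:
        raise ValueError("payload exceeds bucket size")
    header = len(payload).to_bytes(2, "big")  # original length
    padding = bytes(bucket - len(payload))    # zero fill
    return header + payload + padding

def unpad_packet(packet: bytes) -> bytes:
    n = int.from_bytes(packet[:2], "big")
    return packet[2 : 2 + n]
```

With this scheme a one-byte payload and a 150-byte payload produce packets of identical length, which is exactly the property that defeats length-based traffic analysis.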
Advisors/Committee Members: Bettati, Riccardo (advisor), Loguinov, Dmitri (committee member), Annapareddy, Narasimha (committee member).
Subjects/Keywords: Traffic Analysis; Padding; SRPP; VoIP; Security; Privacy and Anonymity

Texas A&M University
3.
Zhuo, Yue.
IRLstack 3.0: High-Performance Windows Sockets.
Degree: MS, Computer Science, 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/154001
With the ever-growing volume and speed of Internet traffic, network applications place higher demands on packet I/O rates. Although 1-Gbps and even 10-Gbps Ethernet are widely adopted, achieving wire rate with small packets remains hindered by bottlenecks inside the TCP/IP stack. Improvements have been made for Linux, but there is still limited work on Windows. To bridge this gap, we build a new generation of our network driver IRLstack and show that it can achieve 10-Gbps wire rate (i.e., 14.88 Mpps), for both send and receive, with zero CPU utilization. This compares favorably to the fastest Linux versions, which typically saturate one or more CPU cores and often fail to achieve this rate in both directions.
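The 14.88 Mpps figure is the theoretical packet rate of 10-Gbps Ethernet at minimum frame size, which follows from the on-wire cost of a 64-byte frame plus preamble and inter-frame gap:

```python
# Where 14.88 Mpps comes from: 10 Gbps divided by the on-wire cost of a
# minimum-size Ethernet frame (64 B frame + 8 B preamble + 12 B gap).
LINK_BPS = 10_000_000_000
FRAME = 64          # minimum Ethernet frame, bytes
PREAMBLE = 8
IFG = 12            # inter-frame gap
wire_bits = (FRAME + PREAMBLE + IFG) * 8   # 672 bits per packet on the wire
pps = LINK_BPS / wire_bits                 # ~14.88 million packets/s
print(round(pps / 1e6, 2))                 # prints 14.88
```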
Advisors/Committee Members: Loguinov, Dmitri (advisor), Bettati, Riccardo (committee member), Reddy, Narasimha (committee member).
Subjects/Keywords: network driver; socket; windows; high-performance

Texas A&M University
4.
Nix, Timothy Glen.
Covert Communication Networks.
Degree: PhD, Computer Science, 2013, Texas A&M University
URL: http://hdl.handle.net/1969.1/151304
A covert communications network (CCN) is a connected, overlay, peer-to-peer network used to support communications within a group in which the survival of the group depends on the confidentiality and anonymity of communications, on concealment of participation in the network from both other members of the group and external eavesdroppers, and finally on resilience against disconnection. In this dissertation, we describe the challenges and requirements for such a system. We consider the topologies of resilient covert communications networks that: (1) minimize the impact on the network in the event of a subverted node; and (2) maximize the connectivity of the survivor network after the removal of the subverted node and its closed neighborhood. We analyze the properties of resilient covert networks, propose measurements for determining the suitability of a topology for use in a covert communication network, and determine the properties of an optimal covert network topology. We analyze multiple topologies and identify two constructions that are capable of generating optimal topologies. We then extend these constructions to produce near-optimal topologies that can “grow” as new nodes join the network. We also address protocols for membership management and routing. Finally, we describe the architecture of a prototype system for instantiating a CCN.
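The resilience criterion in (2) can be illustrated with a small sketch: remove a subverted node together with its closed neighborhood, then check whether the survivors remain connected. This is a toy adjacency-set version for intuition, not one of the dissertation's constructions.

```python
# Check whether the survivor network stays connected after removing a
# subverted node and its closed neighborhood (the node plus all its
# neighbors). Plain adjacency sets and BFS; illustrative only.
from collections import deque

def survivor_connected(adj: dict, subverted) -> bool:
    removed = {subverted} | adj[subverted]     # closed neighborhood
    survivors = {v for v in adj if v not in removed}
    if not survivors:
        return True  # vacuously connected
    start = next(iter(survivors))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w in survivors and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == survivors

# A 6-cycle: removing node 0 and its neighbors (1, 5) leaves the path 2-3-4.
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
```

On the 6-cycle the survivor network is a connected path, whereas on a 7-node path graph removing an interior node's closed neighborhood splits the survivors into two components.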
Advisors/Committee Members: Bettati, Riccardo (advisor), Liu, Jyh-Charn (committee member), Klappenecker, Andreas (committee member), Rogers, Jonathan (committee member).
Subjects/Keywords: anonymity; anonymous communication; communications networks; covert communication; membership concealment; overlay networks; peer-to-peer; privacy; security

Texas A&M University
5.
Yang, Xu.
New Image Processing Methods for Ultrasound Musculoskeletal Applications.
Degree: PhD, Electrical Engineering, 2018, Texas A&M University
URL: http://hdl.handle.net/1969.1/173591
In the past few years, ultrasound (US) imaging modalities have received increasing interest as diagnostic tools for orthopedic applications. The goal for many of these novel ultrasonic methods is to be able to create three-dimensional (3D) bone visualization non-invasively, safely and with high accuracy and spatial resolution. Availability of accurate bone segmentation and 3D reconstruction methods would help correctly interpreting complex bone morphology as well as facilitate quantitative analysis. However, in vivo ultrasound images of bones may have poor quality due to uncontrollable motion, high ultrasonic attenuation and the presence of imaging artifacts, which can affect the quality of the bone segmentation and reconstruction results.
In this study, we investigate the use of novel ultrasonic processing methods that can significantly improve bone visualization, segmentation and 3D reconstruction in ultrasound volumetric data acquired in applications in vivo. Specifically, we investigate the use of new elastography-based, Doppler-based and statistical shape model-based methods that can be applied to ultrasound bone imaging applications with the overall major goal of obtaining fast yet accurate 3D bone reconstructions. This study is composed of three projects, which all have the potential to significantly contribute to this major goal.
The first project deals with the fast and accurate implementation of correlation-based elastography and poroelastography techniques for real-time assessment of the mechanical properties of musculoskeletal tissues. The rationale behind this project is that, in the future, elastography-based features can be used to reduce false positives in ultrasonic bone segmentation methods based on the differences between the mechanical properties of soft tissues and the mechanical properties of hard tissues. In this study, a hybrid computation model is designed, implemented and tested to achieve real-time performance without compromise in elastographic image quality.
In the second project, a Power Doppler-based signal enhancement method is designed and tested with the intent of increasing the contrast between soft tissue and bone while suppressing the contrast between soft tissue and connective tissue, which is often a cause of false positives in ultrasonic bone segmentation problems. Both in-vitro and in-vivo experiments are performed to statistically analyze the performance of this method.
In the third project, a statistical shape model based bone surface segmentation method is proposed and investigated. This method uses statistical models to determine if a curve detected in a segmented ultrasound image belongs to a bone surface or not. Both in-vitro and in-vivo experiments are performed to statistically analyze the performance of this method.
I conclude this Dissertation with a discussion on possible future work in the field of ultrasound bone imaging and assessment.
Advisors/Committee Members: Righetti, Raffaella (advisor), Bettati, Riccardo (committee member), Wright, Steven M. (committee member), Liu, Tie (committee member).
Subjects/Keywords: Ultrasound Musculoskeletal Image; Elastography; Ultrasound Doppler Image; Bone segmentation

Texas A&M University
6.
Xia, Xiangzhou.
Efficient and Scalable Listing of Four-Vertex Subgraph.
Degree: MS, Computer Science, 2016, Texas A&M University
URL: http://hdl.handle.net/1969.1/174219
Identifying four-vertex subgraphs has long been recognized as a fundamental technique in bioinformatics and social networks. However, listing these structures is a challenging task, especially for graphs that do not fit in RAM. To address this problem, we build a set of algorithms, models, and implementations that can handle massive graphs on commodity hardware. Our technique achieves a 4–5 orders-of-magnitude speedup compared to the best prior methods on graphs with billions of edges, with external-memory operation equally efficient.
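For intuition about the problem being solved, here is a naive in-memory baseline that lists one class of four-vertex subgraphs (4-cliques) by brute force; the thesis's algorithms avoid exactly this kind of combinatorial scan and operate out of core.

```python
# Naive baseline for four-vertex subgraph listing (here: 4-cliques).
# Illustrates the problem statement only; it enumerates all vertex
# quadruples and is hopeless on graphs with billions of edges.
from itertools import combinations

def list_4cliques(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = sorted(adj)
    return [q for q in combinations(nodes, 4)
            if all(b in adj[a] for a, b in combinations(q, 2))]

# K5 (complete graph on 5 vertices) contains C(5,4) = 5 distinct 4-cliques.
k5 = [(a, b) for a, b in combinations(range(5), 2)]
```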
Advisors/Committee Members: Loguinov, Dmitri (advisor), Bettati, Riccardo (committee member), Reddy, A. L. Narasimha (committee member).
Subjects/Keywords: graph; motif

Texas A&M University
7.
Deka, Sthiti.
A CPU-GPU Hybrid Approach for Accelerating Cross-correlation Based Strain Elastography.
Degree: MS, Electrical Engineering, 2011, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2010-05-7908
Elastography is a non-invasive imaging modality that uses ultrasound to estimate the elasticity of soft tissues. The resulting images are called 'elastograms'. Elastography techniques are promising as cost-effective tools in the early detection of pathological changes in soft tissues. The quality of elastographic images depends on the accuracy of the local displacement estimates. Cross-correlation based displacement estimators are precise and sensitive. However, cross-correlation based techniques are computationally intensive and may limit the use of elastography as a real-time diagnostic tool. This study investigates the use of parallel general-purpose graphics processing unit (GPGPU) engines for speeding up the generation of elastograms at real-time frame rates while preserving elastographic image quality. To achieve this goal, a cross-correlation based time-delay estimation algorithm was developed in the C programming language and profiled to locate performance blocks. The hotspots were addressed by employing software pipelining, read-ahead, and elimination of redundant computations. The algorithm was then analyzed for parallelization on the GPGPU, and the stages that would map well to the GPGPU hardware were identified. By employing optimization principles for efficient memory access and efficient execution, a net improvement of 67x with respect to the original optimized C version of the estimator was achieved. For typical diagnostic depths of 3-4 cm and typical elastographic processing parameters, this implementation can yield elastographic frame rates on the order of 50 fps. It was also observed that not all of the stages in elastography can be offloaded to the GPGPU for computation, because some stages have sub-optimal memory access patterns. Additionally, data transfer from graphics card memory to system memory can be efficiently overlapped with concurrent CPU execution.
Therefore, a hybrid model of computation, in which the computational load is optimally distributed between the CPU and GPGPU, was identified as an optimal approach to adequately tackle the speed-quality problem in real-time imaging. The results of this research suggest that the use of the GPGPU as a co-processor to the CPU may allow generation of elastograms at real-time frame rates without significant compromise in image quality, a scenario that could be very favorable in real-time clinical elastography.
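The estimator being accelerated can be sketched as follows: slide a post-compression window against the pre-compression signal and report the lag with the highest correlation. This is a deliberately slow pure-Python toy for clarity; the variable names are illustrative, and the real implementation adds sub-sample interpolation and normalization.

```python
# Toy cross-correlation time-delay estimator: the lag that maximizes the
# correlation between pre- and post-compression windows is the estimated
# local displacement (in samples). Illustrative only.
def best_lag(pre, post, max_lag):
    def corr(a, b):
        return sum(x * y for x, y in zip(a, b))
    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            scores[lag] = corr(pre[lag:], post)
        else:
            scores[lag] = corr(pre, post[-lag:])
    return max(scores, key=scores.get)

# A pulse shifted by 3 samples should yield an estimated lag of 3.
sig = [0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0, 0.0, 0.0, 0.0]
shifted = sig[3:] + [0.0, 0.0, 0.0]
```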
Advisors/Committee Members: Righetti, Raffaella (advisor), Kundur, Deepa (committee member), Ji, Jim (committee member), Bettati, Riccardo (committee member).
Subjects/Keywords: elastography; GPGPU; accelerating; speedup; cross-correlation; ultrasound

Texas A&M University
8.
Malave-Bonet, Javier.
A Benchmarking Platform For Network-On-Chip (NOC) Multiprocessor System-On- Chips.
Degree: MS, Computer Engineering, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8662
Network-on-Chip (NOC) based designs have garnered significant attention from both researchers and industry over the past several years. The analysis of these designs has focused on broad topics such as NOC component micro-architecture, fault-tolerant communication, and system memory architecture. Nonetheless, the design of low-latency, high-bandwidth, low-power and area-efficient NOCs is extremely complex due to the conflicting nature of these design objectives. Benchmarks are an indispensable tool in the design process, providing thorough measurement and fair comparison between designs in order to achieve optimal results (i.e., performance, cost, quality of service).
This research proposes a benchmarking platform called NoCBench for evaluating the performance of Network-on-Chip designs. Although previous research has proposed standard guidelines for developing benchmarks for Network-on-Chip, this work moves forward and proposes a SystemC-based simulation platform for system-level design exploration. It provides an initial set of synthetic benchmarks for on-chip network interconnection validation along with an initial set of standardized processing cores, NOC components, and system-wide services.
The benchmarks were constructed using synthetic applications described by Task Graphs For Free (TGFF) task graphs extracted from the E3S benchmark suite. Two benchmarks were used for characterization: Consumer and Networking. They are characterized based on throughput and latency. Case studies show how they can be used to evaluate metrics beyond throughput and latency (i.e., traffic distribution).
The contribution of this work is two-fold: 1) this study provides a methodology for benchmark creation and characterization using NoCBench that evaluates important metrics in NOC design (i.e., end-to-end packet delay, throughput); 2) the developed full-system simulation platform provides a complete environment for further benchmark characterization on NOC-based MPSoCs as well as system-level design space exploration.
Advisors/Committee Members: Mahapatra, Rabi N. (advisor), Bettati, Riccardo (committee member), Gratz, Paul (committee member).
Subjects/Keywords: Network-on-Chip; Benchmarking; System-on-Chip; Multicore; System C

Texas A&M University
9.
Liu, Sanmin.
Towards Privacy Preserving of Forensic DNA Databases.
Degree: MS, Computer Science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10337
Protecting the privacy of individuals is critical for forensic genetics. In kinship/identity testing, DNA profiles in the database that are related to the user's query need to be extracted; however, unrelated profiles cannot be revealed to each other. The challenge is that today's DNA databases usually contain millions of DNA profiles, which is too large to perform privacy-preserving queries with current cryptosystems directly. In this thesis, we propose a scalable system to support privacy-preserving queries in DNA databases. A two-phase strategy is designed: the first phase is a Short Tandem Repeat index tree for quickly fetching candidate profiles from disk. It groups loci of DNA profiles by matching probability, so as to reduce the I/O cost required to find a particular profile. The second phase is an Elliptic Curve Cryptosystem based privacy-preserving matching engine, which performs matching between the candidates and the user's sample. In particular, a privacy-preserving DNA profile matching algorithm is designed that achieves O(n) computing time and communication cost. Experimental results show that our system performs well in query latency, query hit rate, and communication cost. For a database of one billion profiles, it takes 80 seconds to return results to the user.
Advisors/Committee Members: Liu, Jyh-Charn (advisor), Bettati, Riccardo (committee member), Yuan, Shuhua (committee member).
Subjects/Keywords: Forensic DNA Database; Privacy Preserving

Texas A&M University
10.
Li, Xiaoyong.
Distributed Synchronization Under Data Churn.
Degree: PhD, Computer Science, 2016, Texas A&M University
URL: http://hdl.handle.net/1969.1/157023
Nowadays an increasing number of applications need to maintain local copies of remote data sources to provide services to their users. Because of the dynamic nature of the sources, an application has to synchronize its copies with the remote sources constantly to provide reliable services. Instead of push-based synchronization, we focus on a pull-based strategy, because it does not require source cooperation and has been widely adopted by existing systems.
The scalability of pull-based synchronization comes at the expense of increased inconsistency of the copied content. We model this system under non-Poisson update/refresh processes and obtain sample-path averages of various metrics of staleness cost, generalizing previous results and studying their statistical properties.
Computing staleness requires knowledge of the inter-update distribution at the source, which can only be estimated through blind sampling – periodic downloads and comparison against previous copies. We show that all previous approaches are biased unless the observation rate tends to infinity or the update process is Poisson. To overcome these issues, we propose four new algorithms that achieve various levels of consistency, which depend on the amount of temporal information revealed by the source and capabilities of the download process.
Then we focus on applying freshness to P2P replication systems. We extend our results to several more difficult algorithms – cascaded replication, cooperative caching, and redundant querying from the clients. Surprisingly, we discover that optimal cooperation involves just a single peer and that redundant querying can hurt the ability of the system to handle load (i.e., may lead to lower scalability).
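The staleness cost studied in this abstract can be illustrated with a small simulation (a minimal sketch with hypothetical parameter names; a Poisson update process is used here only for simplicity, whereas the dissertation explicitly covers non-Poisson updates). A source updates at random times, the client pulls every `refresh_interval` seconds, and the copy is stale from each unseen update until the next pull:

```python
import random

def stale_fraction(update_rate=1.0, refresh_interval=0.5,
                   horizon=10_000.0, seed=42):
    """Estimate the fraction of time a pulled copy is out of date.

    Source updates form a Poisson process with rate `update_rate`
    (illustrative choice only). The client pulls every
    `refresh_interval`; the copy is stale from the first unseen
    update in a window until the pull that closes the window.
    """
    rng = random.Random(seed)
    updates, t = [], 0.0
    while True:
        t += rng.expovariate(update_rate)   # next source update
        if t > horizon:
            break
        updates.append(t)

    stale, i, r = 0.0, 0, refresh_interval
    while r <= horizon:
        window_start = r - refresh_interval
        # skip updates already seen by earlier pulls
        while i < len(updates) and updates[i] <= window_start:
            i += 1
        # copy is stale from the first update in this window until the pull at r
        if i < len(updates) and updates[i] <= r:
            stale += r - updates[i]
        r += refresh_interval
    return stale / horizon
```

As expected, pulling more often reduces staleness but costs more downloads, which is exactly the trade-off blind-sampling estimators have to navigate.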
Advisors/Committee Members: Loguinov, Dmitri (advisor), Bettati, Riccardo (committee member), Caverlee, James (committee member), Reddy, Narasimha (committee member).
Subjects/Keywords: data synchronization; freshness

Texas A&M University
11.
Zhang, Jialong.
Understanding and Detecting Malicious Cyber Infrastructures.
Degree: PhD, Computer Engineering, 2016, Texas A&M University
URL: http://hdl.handle.net/1969.1/158989
Malware (e.g., trojans, bots, and spyware) is still a pervasive threat on the Internet. It is able to infect computer systems and launch a variety of malicious activities such as sending spam, stealing sensitive information, and mounting distributed denial-of-service (DDoS) attacks. To continue these activities without being detected, and to improve their efficiency, cyber-criminals tend to build malicious cyber infrastructures to communicate with their malware and to exploit benign users. In these infrastructures, multiple servers are deployed to be efficient and anonymous in (i) malware distribution (using redirectors and exploit servers), (ii) control (using C&C servers), (iii) monetization (using payment servers), and (iv) robustness against server takedowns (using multiple backups for each type of server).
The most straightforward way to counteract the malware threat is to detect malware directly on infected hosts. This is difficult, however, because malware frequently uses packing and obfuscation techniques to evade state-of-the-art anti-virus tools. An alternative is to detect and disrupt the malicious cyber infrastructures that malware relies on. In this dissertation, we take an important step in this direction and focus on identifying the malicious servers behind these infrastructures. We present a comprehensive inference framework that identifies such servers according to three roles: compromised servers, malicious servers reached through redirection, and malicious servers reached through direct connection. We characterize these roles from four novel perspectives and demonstrate our detection technologies in four systems: PoisonAmplifier, SMASH, VisHunter, and NeighbourWatcher. PoisonAmplifier focuses on compromised servers; it exploits the fact that cybercriminals tend to use compromised servers to trick benign users during an attack, and is therefore designed to proactively find more compromised servers. SMASH focuses on malicious servers reached through direct connection; it exploits the fact that malicious cyber infrastructures usually keep multiple backups to survive server takedowns, and leverages the correlation among malicious servers to infer entire groups of them. VisHunter focuses on the redirections from compromised servers to malicious servers; it exploits the fact that cybercriminals usually conceal their core malicious servers, and is designed to detect those "invisible" servers. NeighbourWatcher focuses on general malicious servers promoted by spammers; it exploits the observation that spammers tend to promote certain servers (e.g., phishing servers) on special websites (e.g., forums and wikis) to trick benign users and to improve the reputation of their malicious servers. In short, we build a comprehensive inference framework to identify servers involved in malicious cyber infrastructures from four novel…
Advisors/Committee Members: Gu, Guofei (advisor), Bettati, Riccardo (committee member), Caverlee, James (committee member), Reddy, Narasimha (committee member).
Subjects/Keywords: malicious cyber infrastructure; network security; malicious server detection

Texas A&M University
12.
Roshan, Rakesh.
Remote USB Ports.
Degree: MS, Computer Science and Engineering, 2013, Texas A&M University
URL: http://hdl.handle.net/1969.1/151722
Simplicity, easy installation, plug-and-play operation, high bandwidth, low latency, and the ability to supply power are features of USB devices. Because of these features, many sensors and actuators are manufactured with USB interfaces for industrial use. These sensors and actuators are deployed in the field, and a computer system with USB interfaces must be present at each device's location for it to work. In industry, the devices are scattered over a large geographical area, and the computers connected to them expose a large attack surface. These computers can be consolidated using virtualization and networking to reduce that attack surface. To consolidate them, we need a solution that extends USB ports over networks, so that a USB sensor or actuator deployed in the field can be accessed by a remote system securely.
In this thesis, we propose the remote USB port, an abstraction of a USB port. In the USB core driver of the server machine, the status of every port, together with hub information, is stored in a port status table. On the client machine, a virtual host driver is created to manage proxy USB ports. When a device is inserted into or removed from a USB port on the server, the client is notified and the corresponding device driver is loaded or unloaded. To secure URBs, URB headers are encrypted before being sent over the network. We implemented our solution in the Linux 3.5 kernel and tested it on two machines connected over a 100 Mbps network. Various types of USB devices were connected to the server machine and exercised from the client machine. We found our solution to be independent of the device, device driver, and USB protocol, and transparent to network and device failures.
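The port-status-table-plus-notification pattern described above can be sketched in a few lines (hypothetical class and method names; the thesis implements this inside the Linux 3.5 USB core driver in C, and URB-header encryption is omitted here):

```python
class PortStatusTable:
    """Toy model of the server-side port status table: the server
    records which device occupies each port and notifies subscribed
    clients (virtual host drivers) of attach/detach events."""

    def __init__(self, num_ports):
        self.ports = {p: None for p in range(num_ports)}
        self.listeners = []

    def subscribe(self, callback):
        # a client's virtual host driver registers for notifications
        self.listeners.append(callback)

    def attach(self, port, device_id):
        self.ports[port] = device_id
        self._notify("attach", port, device_id)

    def detach(self, port):
        device_id, self.ports[port] = self.ports[port], None
        self._notify("detach", port, device_id)

    def _notify(self, event, port, device_id):
        for cb in self.listeners:
            cb(event, port, device_id)


events = []
table = PortStatusTable(num_ports=4)
table.subscribe(lambda *e: events.append(e))
table.attach(2, "usb-sensor-01")   # client would load the device driver here
table.detach(2)                    # client unloads the driver
```

The key design point carried over from the thesis is that the client reacts to notifications rather than polling, so driver load/unload on the client tracks physical insertion/removal on the server.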
Advisors/Committee Members: Bettati, Riccardo (advisor), Reddy, A. L. Narasimha (committee member), Stoleru, Radu (committee member).
Subjects/Keywords: USB; Network; Tunneling; IP; TCP

Texas A&M University
13.
Chung, Hyun-Chul.
Information Infrastructures in Distributed Environments: Algorithms for Mobile Networks and Resource Allocation.
Degree: PhD, Computer Science, 2013, Texas A&M University
URL: http://hdl.handle.net/1969.1/151735
A distributed system is a collection of computing entities that communicate with each other to solve some problem. Distributed systems impact almost every aspect of daily life (e.g., cellular networks and the Internet); however, it is hard to develop services on top of distributed systems due to the unreliable nature of computing entities and communication. As handheld devices with wireless communication capabilities become increasingly popular, the task of providing services becomes even more challenging since dynamics, such as mobility, may cause the network topology to change frequently. One way to ease this task is to develop collections of information infrastructures which can serve as building blocks to design more complicated services and can be analyzed independently.
The first part of the dissertation considers the dining philosophers problem (a generalization of the mutual exclusion problem) in static networks. A solution to the dining philosophers problem can be utilized when there is a need to prevent multiple nodes from accessing some shared resource simultaneously. We present two algorithms that solve the dining philosophers problem. The first algorithm considers an asynchronous message-passing model while the second one considers an asynchronous shared-memory model. Both algorithms are crash fault-tolerant in the sense that a node crash only affects its local neighborhood in the network. We utilize failure detectors (system services that provide some information about crash failures in the system) to achieve such crash fault-tolerance. In addition to crash fault-tolerance, the first algorithm provides fairness in accessing shared resources and the second algorithm tolerates transient failures (unexpected corruptions to the system state). Considering the message-passing model, we also provide a reduction such that given a crash fault-tolerant solution to our dining philosophers problem, we implement the failure detector that we have utilized to solve our dining philosophers problem. This reduction serves as the first step towards identifying the minimum information regarding crash failures that is required to solve the dining philosophers problem at hand.
In the second part of this dissertation, we present information infrastructures for mobile ad hoc networks. In particular, we present solutions to the following problems in mobile ad hoc environments: (1) maintaining neighbor knowledge, (2) neighbor detection, and (3) leader election. The solutions to (1) and (3) consider a system with perfectly synchronized clocks while the solution to (2) considers a system with bounded clock drift. Services such as neighbor detection and maintaining neighbor knowledge can serve as a building block for applications that require point-to-point communication. A solution to the leader election problem can be used whenever there is a need for a unique coordinator in the system to perform a special task.
Advisors/Committee Members: Welch, Jennifer L. (advisor), Bettati, Riccardo (committee member), Jiang, Anxiao (committee member), Sprintson, Alexander (committee member).
Subjects/Keywords: Distributed Computing; Mobile Ad Hoc Networks; Leader Election; Neighbor Detection; Resource Allocation; Dining Philosophers Problem

Texas A&M University
14.
Pu, Shi.
GPU-based Parallel Computing Models and Implementations for Two-party Privacy-preserving Protocols.
Degree: PhD, Computer Science, 2013, Texas A&M University
URL: http://hdl.handle.net/1969.1/151845
In (two-party) privacy-preserving applications, two users compute a function over encrypted inputs without revealing the plaintext of their input values. Privacy-preserving computing algorithms have to utilize a large amount of computing resources to handle the encryption and decryption operations. In this dissertation, we study optimal utilization of computing resources on the graphics processing unit (GPU) architecture for privacy-preserving protocols based on secure function evaluation (SFE), Elliptic Curve Cryptography (ECC), and related algorithms. A number of privacy-preserving protocols are implemented, including private set intersection (PSI), secret handshaking (SH), secure edit distance (ED), and Smith-Waterman (SW). PSI is chosen to represent ECC point-multiplication-related computations, SH bilinear pairing, and the last two SFE-based dynamic programming (DP) problems. They represent different types of computations, so that an in-depth understanding of the benefits and limitations of the GPU architecture for privacy-preserving protocols is gained.
For SFE-based ED and SW problems, a wavefront parallel computing model on the CPU-GPU architecture under the semi-honest security model is proposed. Low level parallelization techniques for GPU-based gate (de-)garbler, synchronized parallel memory access, pipelining, and general GPU resource mapping policies are developed. This dissertation shows that the GPU architecture can be fully utilized to speed up SFE-based ED and SW algorithms, which are constructed with billions of garbled gates, on a contemporary GPU card GTX-680, with very little waste of processing cycles or memory space.
For the PSI and SH protocols and the underlying ECC algorithms, the analysis in this research shows that the conventional Montgomery-based number system is friendlier to the GPU architecture than the Residue Number System (RNS). Analysis of the experimental results further shows that lazy reduction in higher extension fields has performance benefits only when the GPU architecture has enough fast memory. The resulting Elliptic curve Arithmetic GPU Library (EAGL) can run 3350.9 R-ate (bilinear) pairings/sec and 47,000 point multiplications/sec at the 128-bit security level on one GTX-680 card. The primary performance bottleneck is the lack of advanced memory management functions in the contemporary GPU architecture for bilinear pairing operations. Substantial performance gains can be expected when larger on-chip memory and/or more advanced memory prefetching mechanisms are supported in future generations of GPUs.
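The ECC point multiplication that dominates the PSI workload can be illustrated with the standard double-and-add algorithm on a toy curve (a CPU-side sketch only, nothing like EAGL's GPU kernels; the curve y² = x³ + 7 mod 17 and the base point are chosen purely for illustration):

```python
# Toy affine arithmetic on y^2 = x^3 + A*x + B over GF(P_MOD).
P_MOD, A, B = 17, 0, 7

def ec_add(p1, p2):
    """Group law; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                       # inverse points: P + (-P) = O
    if p1 == p2:
        if y1 == 0:
            return None                   # vertical tangent (2-torsion)
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, point):
    """Right-to-left double-and-add: O(log k) group operations
    instead of the k-1 additions of the naive method."""
    result, addend = None, point
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)   # doubling chain: P, 2P, 4P, ...
        k >>= 1
    return result
```

The logarithmic operation count is what makes batching thousands of such multiplications on a GPU worthwhile; the dissertation's contribution lies in how the underlying field arithmetic (Montgomery vs. RNS representation) maps onto GPU memory.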
Advisors/Committee Members: Liu, Jyh-charn (advisor), Bettati, Riccardo (committee member), Gu, Guofei (committee member), Li, Peng (committee member).
Subjects/Keywords: Privacy-preserving computing; Secure Function Evaluation; Dynamic Programming; Elliptic Curve Cryptography; GPU

Texas A&M University
15.
Varghese, Joshua.
Effect of Temporal Acquisition Parameters on the Image Quality of Ultrasound Axial Strain Time-constant Elastograms.
Degree: MS, Electrical Engineering, 2011, Texas A&M University
URL: http://hdl.handle.net/1969.1/153197
Recent developments in ultrasound elastography have suggested the possibility of using elastographic methods to estimate the temporal mechanical properties of complex tissues. In this context, elastographic methods to image the axial strain time constant (TC) have been developed. The axial strain TC is a parameter that is related to the viscoelastic and poroelastic behavior of tissues. Estimation of this parameter can be done using curve fitting methods. However, the effect of temporal ultrasonic acquisition parameters, such as window of observation, acquisition rate, and input noise, on the image quality of the resultant TC elastograms has not been investigated yet. Elucidating such effects could be useful for diagnostic applications.
This work explores the effects of varying windows of observation, acquisition rate, and input noise on the image quality (accuracy and signal-to-noise ratio (SNR)) of axial strain TC estimates and elastograms using a previously developed simulation model. By varying the amount of data collected as a percentage of the expected TC, the algorithms were able to compute a minimum threshold collection time for an accurate TC estimation as a percentage of the expected TC. The effect of acquisition parameters such as acquisition rate and input noise on the minimum threshold collection time was assessed. Experimental data, collected for previous experiments, were used as a proof of principle to corroborate the simulation findings.
The results of this work suggest that there is a linear dependence of the total acquisition time necessary for accurate TC estimates on the true time constant value. The simulation results also indicate that it might be possible to make accurate estimates of the axial strain TC using small windows of observation (as small as 20% of the expected TC) with fast acquisition rates and high input SNR levels. Experimental results suggest that, in practice, a larger window of observation should be used to account for multiple noise sources typically not considered in simulations. This work also suggests that the minimum window of observation necessary for an accurate TC estimate is highly dependent on the acquisition frame rate and the input SNR level. Therefore, use of imaging systems with fast acquisition rates is recommended for studies aiming at measuring time-dependent phenomena in tissues.
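The curve-fitting estimation discussed above can be sketched with a deliberately simplified signal model (all parameter names are illustrative, and the single-exponential y(t) = exp(−t/TC) stands in for the thesis's viscoelastic/poroelastic curves): sample the decay over a window expressed as a fraction of the expected TC, then recover TC by a log-linear least-squares fit.

```python
import math
import random

def estimate_tc(tc_true=10.0, window_frac=0.2, rate=100.0,
                noise_sd=0.0, seed=1):
    """Estimate a time constant from a short observation window.

    Samples y(t) = exp(-t / tc_true) + noise at `rate` Hz over a
    window of `window_frac * tc_true` seconds, then fits
    log y = -t / TC by ordinary least squares and returns -1/slope.
    """
    rng = random.Random(seed)
    duration = window_frac * tc_true          # window as fraction of expected TC
    n = max(2, int(duration * rate))
    ts = [k / rate for k in range(n)]
    ys = [math.exp(-t / tc_true) + rng.gauss(0.0, noise_sd) for t in ts]

    # log-linearize, discarding non-positive samples (possible under noise)
    pts = [(t, math.log(y)) for t, y in zip(ts, ys) if y > 0]
    mt = sum(t for t, _ in pts) / len(pts)
    ml = sum(l for _, l in pts) / len(pts)
    slope = (sum((t - mt) * (l - ml) for t, l in pts)
             / sum((t - mt) ** 2 for t, _ in pts))
    return -1.0 / slope
```

With zero input noise even a 20% window recovers TC exactly, consistent with the simulation finding; adding noise (`noise_sd > 0`) degrades short-window estimates first, which is why the experimental results call for larger windows in practice.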
Advisors/Committee Members: Righetti, Raffaella (advisor), Ji, Jim (committee member), Kundur, Deepa (committee member), Bettati, Riccardo (committee member).
Subjects/Keywords: Time constant estimation; Elastography; Ultrasound

Texas A&M University
16.
Xu, Zhaoyan.
Analysis and Defense of Emerging Malware Attacks.
Degree: PhD, Computer Engineering, 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/153235
The persistent evolution of malware brings great challenges to the anti-malware industry. First, traditional signature-based detection and prevention schemes produce ever-growing signature databases, and each end-host user has to install an anti-virus tool and tolerate the huge amount of resources consumed by pairwise matching. On the analysis side, emerging malware can detect its running environment and decide whether or not to infect the host; hence, traditional dynamic malware analysis can no longer find the desired malicious logic if the targeted environment cannot be extracted in advance. Both problems reveal that current malware defense schemes are too passive and reactive to fulfill the task.
The goal of this research is to develop new analysis and protection schemes for emerging malware threats. First, this dissertation performs a detailed study of recent targeted malware attacks; based on the study, we develop a new technique to perform targeted malware analysis effectively and efficiently. Second, it studies a new trend of massive malware intrusion and proposes a new protection scheme to proactively defend against malware attacks. Lastly, we focus on new P2P malware and propose a new scheme, named informed active probing, for large-scale P2P malware analysis and detection. Furthermore, our Internet-wide evaluation shows that our active probing scheme can successfully detect malicious P2P malware and its corresponding malicious servers.
Advisors/Committee Members: Gu, Guofei (advisor), Bettati, Riccardo (committee member), Shi, Weiping (committee member), Liu, Jyh-Charn (committee member).
Subjects/Keywords: Computer Security; Malware Analysis and Defense

Texas A&M University
17.
George, Stephen M.
Measurement to Intelligence: Feature Extraction, Modeling and Predictive Analysis of Asymmetric Conflict Events.
Degree: PhD, Computer Science, 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/153293
The conflict events that comprise asymmetric warfare are a primary killer of both combatants and civilians on the modern battlefield. Improvised explosive devices (IED) and direct fire (DF), the most common of these attacks, claim thousands of lives as conventional and unconventional forces clash. Computer-based predictive analysis can be used to identify locations that are useful for these events, potentially providing the awareness needed to disrupt or avoid attacks before they are launched.
In this dissertation, I propose an analytical framework for predictive analysis of asymmetric conflict events. This framework incorporates a tactics-aware system model based on attacker roles that is populated with a set of geomorphometric and visibility-constrained features describing terrain and proximity to necessary supporting structures. Features that identify and assess the utility of terrain for use by risk-averse attackers are important contributors to the model. Statistical learning is used to extract spatially and temporally constrained tactical patterns. These patterns are then used to predict the utility of future or unvisited locations for conflict events.
Major contributions of this dissertation include:
(1) A concise, accurate feature representation of conflict events in non-urban environments;
(2) A system model based on attacker roles that captures the tactical patterns of conflict events;
(3) Accurate conflict event classification algorithms that support predictive analysis; and
(4) A novel method for detecting and describing features that support risk-averse attackers.
The framework has been implemented and tested on real-world IED and DF data collected from the conflict in Afghanistan in 2011-2012. Several learning techniques are assessed using two dimensionality reduction schemes under a variety of spatial, temporal and combined constraints. A resource-unconstrained version of the framework accurately predicts conflict events across a wide range of terrain types and over the 19 months covered by available data. A limited version of the framework that assumes less computational capability provides useful predictive analysis that can be performed in mobile and resource constrained environments.
Advisors/Committee Members: Liu, Jyh-Charn (advisor), Bettati, Riccardo (committee member), Gu, Guofei (committee member), Bishop, Michael (committee member).
Subjects/Keywords: improvised explosive device; asymmetric warfare; machine learning; geomorphometry; feature extraction; risk aversion; Afghanistan

Texas A&M University
18.
Shankar, Anusha.
Lock Prediction to Reduce the Overhead of Synchronization Primitives.
Degree: MS, Computer Science, 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/154146
The advent of chip multi-processors has led to an increase in computational performance in recent years. Employing efficient parallel algorithms has become important to harness the full potential of multiple cores. One of the major productivity limitations in parallel programming arises from the use of synchronization primitives, which enforce mutual exclusion on critical-section data. Most shared-memory multi-processor architectures provide hardware support for mutually exclusive access to shared data structures using lock and unlock operations. These operations are implemented in hardware as a set of instructions that atomically read and then write a single memory location. Good synchronization techniques should reduce network bandwidth, have low lock-acquisition latency, and be fair in granting requests.
In a typical directory controller based locking scheme, each thread communicates with the directory controller for lock request and lock release. The overhead of this design includes communication with the directory controller for each step of lock acquisition, and this causes high latency transactions. Thus, a significant amount of time is spent in communication as compared to the actual operation.
Previous work has focused on reducing communication to the home node through various techniques. One technique of interest is the Implicit Queue on Lock Bit technique (IQOLB), in which the lock is forwarded directly to the requestor from the thread currently holding it, without communication through the home node. The method has two limitations: the forwarding operation can take place only after the current lock holder has received information about the new requestor from the home node, and the cache coherence protocol must be modified to distinguish a regular memory read request from a synchronization request.
Very little research has been performed in the area of lock prediction. Based on data analysis, we believe that lock communication is predictable and that prediction can improve performance significantly. This research focuses on predicting the sequence in which locks are acquired, so that the thread currently holding a lock can preemptively invalidate the locked cache line and forward it to the subsequent requestor, reducing lock-acquisition time. The predictor is adaptive: whenever a lock is biased towards a thread, it remains in that thread's cache and no invalidation takes place. The technique reduces the number of messages exchanged with the home node without any modification to the cache coherence protocol (it does not need to distinguish a regular memory read request from a synchronization request). Evaluation of the lock predictor on the PARSEC benchmark suite shows an average overall performance improvement of 9% over the base case.
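The prediction idea in this abstract can be illustrated with a simple last-successor scheme: for each lock, remember which thread acquired it after the current holder, and predict that history repeats. This is a hypothetical software sketch for illustration only, not the thesis's mechanism; the class and method names are invented.

```python
class LockPredictor:
    """Last-successor lock predictor (illustrative sketch).

    For each (lock, previous holder) pair, remember which thread acquired
    the lock next; predict that the same successor will follow again.
    """

    def __init__(self):
        self.successor = {}    # (lock, previous holder) -> next thread
        self.last_holder = {}  # lock -> thread that holds (or last held) it

    def observe_acquire(self, lock, thread):
        # Record the handoff from the previous holder to the new one.
        prev = self.last_holder.get(lock)
        if prev is not None and prev != thread:
            self.successor[(lock, prev)] = thread
        self.last_holder[lock] = thread

    def predict_next(self, lock):
        """Guess which thread will request the lock next, or None."""
        return self.successor.get((lock, self.last_holder.get(lock)))
```

On a round-robin acquisition pattern (T0, T1, T2, T0, ...) the table converges after one cycle, at which point the holder could forward the cache line to the predicted requestor preemptively.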
Advisors/Committee Members: Gratz, Paul V (advisor), Bettati, Riccardo (advisor), Amato, Nancy M (committee member).
Subjects/Keywords: locks; synchronization primitives

Texas A&M University
19.
Fatehi, Ehsan.
ILP and TLP in Shared Memory Applications: A Limit Study.
Degree: PhD, Computer Engineering, 2015, Texas A&M University
URL: http://hdl.handle.net/1969.1/155119
The work in this dissertation explores the limits of chip multiprocessors (CMPs) with respect to shared-memory, multi-threaded benchmarks, helping to identify microarchitectural bottlenecks. This, in turn, will lead to more efficient CMP design.
In the first part we introduce DotSim, a trace-driven toolkit designed to explore the limits of instruction- and thread-level scaling and to identify microarchitectural bottlenecks in multi-threaded applications. DotSim constructs an instruction-level Data Flow Graph (DFG) from each thread in a multi-threaded application, adjusting for inter-thread dependencies. The DFGs change dynamically depending on the microarchitectural constraints applied. Exploiting these DFGs allows for easy extraction of the performance upper bound. We perform a case study modeling the upper-bound performance limits of a processor microarchitecture patterned after an AMD Opteron.
In the second part, we conduct a limit study simultaneously analyzing the two dominant forms of parallelism exploited by modern computer architectures: Instruction Level Parallelism (ILP) and Thread Level Parallelism (TLP). This study gives insight into the upper bounds of performance that future architectures can achieve, and it identifies the bottlenecks of emerging workloads. To the best of our knowledge, ours is the first study that combines the two forms of parallelism into one study with modern applications. We evaluate the PARSEC multithreaded benchmark suite using DotSim. We make several contributions describing the high-level behavior of next-generation applications. For example, we show that these applications contain up to a factor of 929X more ILP than is currently extracted from real machines. We then show the effects on exploitable ILP of breaking the application into increasing numbers of threads (exploiting TLP), instruction window size, realistic branch prediction, realistic memory latency, and thread dependencies. Our examination shows that these benchmarks differ vastly from one another. As a result, we expect that no single homogeneous microarchitecture will work optimally for all of them, arguing for reconfigurable, heterogeneous designs.
In the third part of this thesis, we use our novel simulator DotSim to study the benefits of prefetching shared memory within critical sections. In this chapter we calculate the upper bound of performance under our given constraints. Our intent is to provide motivation for new techniques to exploit the potential benefits of reducing latency of shared memory among threads. We conduct an idealized workload characterization study focusing on the data that is truly shared among threads, using a simplified memory model. We explore the degree of shared memory criticality, and characterize the benefits of being able to use latency reducing techniques to reduce execution time and increase ILP. We find that on average true sharing among benchmarks is quite low compared to overall memory accesses on the critical path and overall program. We also…
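The core of a DFG-based limit study can be shown with a toy computation: the ILP upper bound of a trace is the instruction count divided by the longest data-dependence chain. This sketch is not DotSim itself; the trace format and function name are assumptions for illustration.

```python
def ilp_upper_bound(trace):
    """ILP upper bound for a trace of (dest_reg, [src_regs]) in program order.

    Each instruction's dataflow depth is one more than the deepest producer
    of its sources; the bound is total instructions / longest chain.
    """
    depth = {}    # register -> dataflow depth of its last producer
    longest = 0
    for dest, srcs in trace:
        d = 1 + max((depth.get(s, 0) for s in srcs), default=0)
        depth[dest] = d
        longest = max(longest, d)
    return len(trace) / longest if longest else 0.0

# A fully serial chain has ILP 1; fully independent instructions have ILP n.
serial = [("r1", []), ("r2", ["r1"]), ("r3", ["r2"])]
parallel = [("r1", []), ("r2", []), ("r3", [])]
```

Applying microarchitectural constraints (finite window, branch mispredictions, memory latency) amounts to inserting extra edges or serialization points into this graph, which lengthens the critical chain and lowers the bound.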
Advisors/Committee Members: Gratz, Paul V (advisor), Reddy, Narasimha (committee member), Palermo, Sam (committee member), Bettati, Riccardo (committee member).
Subjects/Keywords: instruction-level parallelism; limits parallelism; concurrency pthreads; thread-level parallelism

Texas A&M University
20.
Sarma, Anurupa.
Quantum codes over Finite Frobenius Rings.
Degree: MS, Computer Science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2012-08-11529
It is believed that quantum computers would be able to solve complex problems more quickly than any deterministic or probabilistic computer. Quantum computers exploit the rules of quantum mechanics to speed up computations. However, building a quantum computer remains a daunting task. A quantum computer, like any quantum mechanical system, is susceptible to decoherence of quantum bits resulting from interaction of the stored information with the environment. Error correction is then required to restore a quantum bit, which has changed due to interaction with the external state, to a previous non-erroneous state in the coding subspace. Until now, methods for quantum error correction have mostly been based on stabilizer codes over finite fields. The aim of this thesis is to construct quantum error-correcting codes over finite Frobenius rings. We introduce stabilizer codes over a quadratic algebra, which allows one to use the Hamming distance rather than a less familiar notion of distance. We also develop propagation rules to build new codes from existing codes. Nonbinary codes have been realized as Gray images of linear Z4 codes; hence the most natural class of rings suitable for coding theory is the finite Frobenius rings, as they allow the dual code to be formulated much as over finite fields. At the end we show examples of code construction along with various results on quantum codes over finite Frobenius rings, especially codes over Zm.
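The Gray-image construction referred to above uses the standard Gray map from Z4 to binary pairs (0→00, 1→01, 2→11, 3→10), under which Lee distance over Z4 equals Hamming distance of the binary images. A minimal sketch of that isometry:

```python
# Standard Gray map from Z4 symbols to binary pairs.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_image(word):
    """Binary image of a Z4 word under the Gray map."""
    return tuple(b for sym in word for b in GRAY[sym])

def lee_weight(sym):
    # Lee weight on Z4: distance to 0 around the cycle 0-1-2-3.
    return min(sym % 4, 4 - sym % 4)

def lee_distance(u, v):
    return sum(lee_weight(a - b) for a, b in zip(u, v))

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))
```

Because the map is a distance-preserving bijection, good linear codes over Z4 yield good (generally nonlinear) binary codes, which is the motivation for working over Frobenius rings more generally.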
Advisors/Committee Members: Klappenecker, Andreas (advisor), Bettati, Riccardo (committee member), Kish, Laszlo B. (committee member).
Subjects/Keywords: Quantum Computing; Frobenius Rings; Error correcting Codes; Codes over Rings; Stabilizer Codes; Non binary stabilizer codes

Texas A&M University
21.
Betts, Hutson.
Physical Resource Management and Access Mediation Within the Cloud Computing Paradigm.
Degree: MS, Computer Engineering, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2012-08-11819
Cloud computing has seen a surge over the past decade as corporations and institutions have sought to leverage the economies of scale achievable through this new computing paradigm. However, the rapid adoption of technologies that implement the existing cloud computing paradigm threatens to undermine the long-term utility of the cloud model of computing. In this thesis we address how to accommodate the variety of access requirements and diverse hardware platforms of cloud computing users by developing extensions to the existing cloud computing paradigm that afford consumer-driven access requirements and the integration of new physical hardware platforms.
Advisors/Committee Members: Bettati, Riccardo (advisor), Stoleru, Radu (committee member), Reddy, A. L. Narasimha (committee member).
Subjects/Keywords: Cloud Computing; Access Mediation; Resource Management; Scaling

Texas A&M University
22.
Shamsi, Zain Sarfaraz.
Scalable OS Fingerprinting: Classification Problems and Applications.
Degree: PhD, Computer Science, 2017, Texas A&M University
URL: http://hdl.handle.net/1969.1/161371
The Internet has become ubiquitous in our lives today. With its rapid adoption and widespread growth across the planet, it has drawn many research efforts that attempt to understand and characterize this complex system. One such direction tries to discover the types of devices that compose the Internet, which is the topic of this dissertation.
To accomplish such a measurement, researchers have turned to a technique called OS fingerprinting, which is a method to determine the operating system (OS) of a remote host. However, because the Internet today has evolved into a massive public network, large-scale OS fingerprinting has become a challenging problem. Due to increasing security concerns, most networks today will block many of the probes used by traditional fingerprinting tools (e.g., Nmap), thus requiring a different approach. Consequently, this has given rise to single-probe techniques which offer low overhead and minimal intrusiveness, but in turn require more sophistication in their algorithms as they are limited in the amount of information that they receive and many parameters can inject noise in the measurement (e.g., network delay, packet loss).
This dissertation focuses on understanding the performance of single-probe algorithms. We study existing methods, formalize current problems in the field and devise new algorithms to improve classification accuracy and automate construction of fingerprint databases. We apply our work to multiple Internet-wide scans and discover that besides general purpose machines, the Internet today has grown to include large numbers of publicly accessible peripheral devices (e.g., routers, printers, cameras) and cyber-physical systems (e.g., lighting controllers, medical sensors). We go on to recover empirical distributions of network delays and loss, as well as likelihoods of users re-configuring their devices. With our developed techniques and results, we show that single-probe algorithms are an effective approach for accomplishing wide-scale network measurements.
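A single-probe classifier in the spirit described above can be sketched as a nearest-fingerprint match over a few SYN-ACK features. The database values and scoring rule here are illustrative assumptions, not the dissertation's algorithm (which also models network delay, loss, and user reconfiguration).

```python
# Hypothetical fingerprint database: initial TTL and TCP window per OS.
DB = {
    "linux":   {"ttl": 64,  "win": 29200},
    "windows": {"ttl": 128, "win": 8192},
    "cisco":   {"ttl": 255, "win": 4128},
}

def initial_ttl(observed_ttl):
    # The observed TTL is the initial TTL minus the hop count; round up
    # to the nearest common initial value.
    for t in (64, 128, 255):
        if observed_ttl <= t:
            return t
    return 255

def classify(observed_ttl, window):
    """Match the normalized TTL exactly, then pick the closest window size."""
    ttl = initial_ttl(observed_ttl)
    candidates = [(abs(f["win"] - window), name)
                  for name, f in DB.items() if f["ttl"] == ttl]
    return min(candidates)[1] if candidates else None
```

The hard part the dissertation addresses is precisely what this sketch ignores: features perturbed by delay and loss, devices whose owners change defaults, and building the database automatically at Internet scale.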
Advisors/Committee Members: Loguinov, Dmitri (advisor), Bettati, Riccardo (committee member), Reddy, Narasimha (committee member), Stoleru, Radu (committee member).
Subjects/Keywords: OS fingerprinting; Internet measurement; network security; classification

Texas A&M University
23.
Mohan, Suneil.
Hardware Architecture for Semantic Comparison.
Degree: PhD, Computer Engineering, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-10938
Semantic Routed Networks provide a superior infrastructure for complex search engines. In a Semantic Routed Network (SRN), the routers are the critical component, and they perform semantic comparison as their key computation. As the amount of information available on the Internet grows, the speed and efficiency with which information can be returned to the user become important. Most current search engines scale to meet the growing demand by deploying large data centers with general-purpose computers that consume many megawatts of power. Reducing the power consumption of these data centers while providing better performance will help reduce the costs of operation significantly.
Performing operations in parallel is a key optimization step for better performance on general purpose CPUs. Current techniques for parallelization include architectures that are multi-core and have multiple thread handling capabilities. These coarse grained approaches have considerable resource management overhead and provide only sub-linear speedup.
This dissertation proposes techniques towards a highly parallel, power-efficient architecture that performs semantic comparison as its core activity. Hardware-centric parallel algorithms have been developed to populate the required data structures and then compute semantic similarity. The performance of the proposed design is further enhanced using a pipelined architecture. The proposed algorithms were also implemented on two contemporary platforms, Nvidia CUDA and an FPGA, for performance comparison. To validate the designs, a semantic benchmark has also been created. It has been shown that a dedicated semantic comparator delivers significantly better performance than the other platforms.
Results show that the proposed hardware semantic comparison architecture delivers a speedup of up to 10^5 while reducing power consumption by 80% compared to traditional computing platforms. Future research directions, including better power optimization, architecting the complete semantic router, and using the semantic benchmark for SRN research, are also discussed.
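To make the "semantic comparison as core computation" concrete, here is a common software stand-in: cosine similarity over word counts. The dissertation's comparator implements a more elaborate semantic measure in hardware, so treat this purely as an illustrative baseline, not its algorithm.

```python
from math import sqrt

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two documents' term-frequency vectors."""
    counts_a, counts_b = {}, {}
    for w in doc_a.split():
        counts_a[w] = counts_a.get(w, 0) + 1
    for w in doc_b.split():
        counts_b[w] = counts_b.get(w, 0) + 1
    dot = sum(c * counts_b.get(w, 0) for w, c in counts_a.items())
    norm = (sqrt(sum(c * c for c in counts_a.values()))
            * sqrt(sum(c * c for c in counts_b.values())))
    return dot / norm if norm else 0.0
```

The per-term independence of the dot product is what makes this kind of kernel attractive for the fine-grained parallel and pipelined hardware the dissertation targets.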
Advisors/Committee Members: Mahapatra, Rabi (advisor), Walker, Duncan M. (committee member), Bettati, Riccardo (committee member), Kundur, Deepa (committee member).
Subjects/Keywords: Semantic Comparison; Semantic Networks; Hardware Architecture; Parallel Algorithms

Texas A&M University
24.
Juturu, Amruth Kumar.
Distributed Device Bus.
Degree: MS, Computer Science, 2015, Texas A&M University
URL: http://hdl.handle.net/1969.1/155660
Peripheral devices are hardware components connected to a computer that supplement its functionality. Over the years, a huge improvement has been observed in both the variety and the capabilities of peripheral devices. Starting from the input/output and storage devices of the early days, today's peripheral devices support all aspects of a computer, with peripherals like Graphics Processing Units (GPUs) even supplementing the computational capabilities of the processor. At the same time, support for peripheral devices in computers has vastly improved. While earlier computers supported only static configuration of devices, the plug-and-play capabilities of present-day computers allow devices to be added or removed at run time, reducing the complexity of managing peripheral devices. Today, it is no exaggeration to state that, beyond the computational capability of a computer, it is the peripheral devices that define the user experience.
With the advancements in networking and distributed computing, the definition of what constitutes a computer has blurred: mainframes and supercomputing clusters support batch processing, where processors/cores are treated as resources and the number of processors/cores available for a specific computation can be requested on demand. With cloud computing, users access services hosted across the Internet. However, usage models for peripheral devices have not caught up accordingly. For the most part, peripheral devices are still limited to the computers they are physically attached to. Device virtualization solutions exist that can extend device protocols over the network, enabling users to access devices connected to a different computer. However, these solutions still need direct access both to the computer that has the device plugged in (the device server) and to the computer that intends to use the device (the device client), and they do not support remote plug-and-play. There is thus a need for a device consolidation framework that supports new device usage models in line with the evolving models of computation.
In this thesis, we propose a framework called "Distributed Device Bus", which extends the concept of a conventional peripheral bus to include in its scope, the ports of all the computers that are connected over a network. Like a peripheral bus, a Distributed Device Bus is also associated with a computer called Master node. A Distributed Device Bus supports dynamic addition/deletion of ports and each of these ports can physically belong to any computer in the network. Computers that contribute ports to a Distributed Device Bus are called Provider nodes. A device plugged into any port that is assigned to a Distributed Device Bus is immediately made accessible to applications on master node. This device consolidation framework treats devices as a resource and access to a device is configurable rather than being limited to the computer the device is physically attached to.
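The bookkeeping a Distributed Device Bus master might perform can be sketched as a small registry: providers contribute ports dynamically, and a device plugged into any assigned port becomes visible at the master. This is a hypothetical API invented for illustration; the thesis's framework additionally handles transport and remote plug-and-play signalling.

```python
class DistributedDeviceBus:
    """Registry sketch for a Distributed Device Bus (illustrative only)."""

    def __init__(self, master):
        self.master = master
        self.ports = {}  # (provider node, port id) -> device name or None

    def add_port(self, provider, port):
        # A provider node contributes one of its physical ports to the bus.
        self.ports[(provider, port)] = None

    def remove_port(self, provider, port):
        self.ports.pop((provider, port), None)

    def plug(self, provider, port, device):
        # Remote plug event: a device appears on a contributed port.
        if (provider, port) not in self.ports:
            raise KeyError("port not assigned to this bus")
        self.ports[(provider, port)] = device

    def visible_devices(self):
        """Devices on any contributed port, as seen by master-node apps."""
        return sorted(d for d in self.ports.values() if d is not None)
```

The key point the sketch captures is that port membership, not physical attachment, determines which computer's applications see the device.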
Advisors/Committee Members: Bettati, Riccardo (advisor), Reddy, A. L. Narasimha (committee member), Jarvi, Jaakko (committee member).
Subjects/Keywords: Distributed Device Bus; Device Virtualization; Device Consolidation

Texas A&M University
25.
Pandey, Sanat Kumar.
Performance Comparison of Neighbor Discovery Protocols in Wireless Ad-Hoc Network.
Degree: MS, Computer Engineering, 2015, Texas A&M University
URL: http://hdl.handle.net/1969.1/155754
In this thesis we consider the problem of neighbor discovery in synchronous single-hop wireless ad-hoc networks. The central problem is to establish a broadcasting sequence such that only one transmitter broadcasts at a time while all others listen, and every transmitter in the network gets at least one chance to broadcast. We consider the question: how fast can we achieve neighbor discovery with k nodes in the network, each having a unique id assigned from an id space much larger than k, in radio communication models with and without collision detection? We take the simulation route to answer this question. We implemented one randomized and two deterministic algorithms for neighbor discovery and compared their performance in terms of the number of rounds required as a function of the number of nodes in the network and the size of the space from which ids are chosen. Our simulation results show that the randomized algorithm is the most efficient and the easiest to implement. The deterministic algorithm for the no-collision-detection model has round complexity comparable to the size of the id space and is orders of magnitude less efficient than the randomized algorithm. The deterministic algorithm for the collision-detection model is slower than the randomized algorithm by a factor of log(n), where n is the size of the id space. Our analysis is useful for choosing optimal algorithms for field applications, depending on the radio communication model and network topology. It reveals any large constants or second-order terms discarded in the asymptotic analysis, which reduce the effectiveness of some algorithms in applications.
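The randomized algorithm the thesis finds most efficient follows a classic pattern: in each round, every undiscovered node broadcasts with probability 1/k, and a round succeeds only when exactly one node transmits. A small simulation sketch (assumed details, not the thesis's implementation):

```python
import random

def discover_all(k, rng=random.Random(1)):
    """Simulate randomized neighbor discovery; return rounds until done.

    Each round, every undiscovered node transmits with probability 1/k.
    Zero transmitters is an idle slot; two or more is a collision; exactly
    one transmitter is heard by all and becomes discovered.
    """
    undiscovered = set(range(k))
    rounds = 0
    while undiscovered:
        rounds += 1
        transmitters = [n for n in undiscovered if rng.random() < 1.0 / k]
        if len(transmitters) == 1:
            undiscovered.discard(transmitters[0])
    return rounds
```

With transmission probability 1/k, a round succeeds with probability roughly 1/e while most nodes remain, so expected completion is on the order of e*k rounds, independent of the id-space size n that dominates the deterministic schemes.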
Advisors/Committee Members: Welch, Jennifer L (advisor), Sprintson, Alex (committee member), Bettati, Riccardo (committee member).
Subjects/Keywords: Neighbor discovery; ad-hoc radio network

Texas A&M University
26.
Wilcox, Eric Scott.
A Relay Prevention Technique for Near Field Communication.
Degree: MS, Computer Science, 2015, Texas A&M University
URL: http://hdl.handle.net/1969.1/155758
The use of near field communication (NFC) has expanded as rapidly as Bluetooth and similar technologies and shows no signs of slowing down. It is used in many different systems, such as contactless payment processing, movie posters, security access, and passport identification. NFC-enabled devices include cell phones, credit cards, and key chains. With the spread of any new technology come security vulnerabilities that malicious users will try to exploit. NFC is particularly vulnerable to what is known as a relay attack. The relay attack is similar to the man-in-the-middle attack, but the data need not be unencrypted to be vulnerable. The relay attack is currently undetectable and unstoppable. Many solutions have been proposed, but no real-world solution has been found that does not require significant changes to the NFC protocol, or even the hardware. In this work we propose a technique that uses careful timing analysis of tag communication to identify a transaction as dangerous and thus raise an alert about the potential threat. This could be built into mobile devices and readers already deployed, providing a level of security not currently available while maintaining the protocols set forth by the ISO. A proof of concept has been built and tested on custom hardware as well as on an Android Nexus 4 to detect and prevent the relay attack. In this thesis we give an overview of security issues in NFC communication, describe the relay attack in detail, present our timing-based countermeasure and its implementation, and give results of our evaluation of timing-based relay prevention.
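The timing-analysis idea can be sketched as a threshold test: a relayed transaction adds forwarding latency, so tag responses slower than a calibrated baseline are flagged. The threshold rule and the timing values below are illustrative assumptions, not the thesis's measured parameters.

```python
from statistics import median

def is_suspected_relay(response_times_ms, baseline_ms, tolerance=1.5):
    """Flag a transaction when the median response time exceeds
    baseline_ms * tolerance (hypothetical decision rule)."""
    return median(response_times_ms) > baseline_ms * tolerance

# Illustrative timings: direct tag responses vs. responses passing
# through a relay pair that forwards frames over a side channel.
direct = [4.8, 5.1, 5.0, 4.9]
relayed = [22.3, 25.1, 23.8, 24.0]
```

Using the median rather than a single sample gives some robustness to jitter, which matters because the check must not disturb ISO-conformant timing on legitimate transactions.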
Advisors/Committee Members: Bettati, Riccardo (advisor), Stoleru, Radu (advisor), Reddy, A. L. Narasimha (committee member).
Subjects/Keywords: NFC; Near Field Communication; Relay Attack; Relay Attack Prevention
APA (6th Edition):
Wilcox, E. S. (2015). A Relay Prevention Technique for Near Field Communication. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/155758
Chicago Manual of Style (16th Edition):
Wilcox, Eric Scott. “A Relay Prevention Technique for Near Field Communication.” 2015. Masters Thesis, Texas A&M University. Accessed April 21, 2021.
http://hdl.handle.net/1969.1/155758.
MLA Handbook (7th Edition):
Wilcox, Eric Scott. “A Relay Prevention Technique for Near Field Communication.” 2015. Web. 21 Apr 2021.
Vancouver:
Wilcox ES. A Relay Prevention Technique for Near Field Communication. [Internet] [Masters thesis]. Texas A&M University; 2015. [cited 2021 Apr 21].
Available from: http://hdl.handle.net/1969.1/155758.
Council of Science Editors:
Wilcox ES. A Relay Prevention Technique for Near Field Communication. [Masters Thesis]. Texas A&M University; 2015. Available from: http://hdl.handle.net/1969.1/155758

Texas A&M University
27.
Yang, Chen.
A COMMUNICATION FRAMEWORK FOR OPPORTUNISTIC MOBILE NETWORKS WITH DIVERSE CONNECTIVITY.
Degree: PhD, Computer Engineering, 2018, Texas A&M University
URL: http://hdl.handle.net/1969.1/173495
► An Opportunistic Mobile Network (OMN) refers to the network paradigm where wireless devices communicate with each other through the opportunistically formed wireless links. Routing in…
(more)
▼ An Opportunistic Mobile Network (OMN) refers to the network paradigm where wireless devices communicate with each other through opportunistically formed wireless links. Routing in OMNs relies on node mobility and the store-and-forward mechanism. It is paramount to have energy-efficient, robust and cost-effective routing protocols in such environments. Previous research usually assumes that the connectivity in such networks is extremely sparse and that the network is purely infrastructure-less. However, real-world deployments of OMNs actually exhibit diverse connectivity, i.e., connectivity may range from sparsely connected to well connected, or the network may coexist with infrastructure. Consequently, the simplified assumptions of previous solutions lead to suboptimal behaviors of routing protocols, including redundant transmissions, excessive or insufficient data replication, poor forwarding decisions, etc.
In this dissertation, in order to address the aforementioned problems, we propose a communication framework for OMNs with diverse connectivity, which consists of a series of algorithms and protocols that aim to provide energy-efficient, robust and cost-aware communication services to applications. In this framework, we propose: a) algorithms that carefully schedule transmissions in an opportunistic contact involving multiple nodes; b) routing protocols that simultaneously consider mobile nodes' delivery capability and traffic load; c) mathematical tools that characterize not only Inter-Contact Times but also their correlations; d) adaptive mechanisms to realize dynamic data replication; and e) forwarding strategies that optimally trade off energy consumption and delay in a cost-aware fashion when utilizing infrastructure.
We evaluate the proposed routing protocols and algorithms through extensive simulations using both synthetic network models and real-world mobility traces. We also conduct real-world experiments on a wireless testbed to demonstrate their practicality. The evaluation results show that, with diverse connectivity in mind, the proposed algorithms and protocols greatly improve networking performance and efficiency. The consideration of delay correlations and a mechanism for dynamic replication are critical for a routing protocol to perform well across a wide range of network connectivity. When infrastructure is present, our proposed forwarding strategy helps improve the energy-delay trade-off when cost is a constraint.
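Item b) above, a forwarding decision that weighs a neighbor's delivery capability against its traffic load, can be illustrated with a minimal sketch. The scoring function, the weight `alpha`, and the field names are illustrative assumptions, not the dissertation's actual protocol.

```python
def forwarding_score(delivery_prob, queue_len, queue_cap, alpha=0.7):
    """Combine a candidate relay's delivery capability with its current
    traffic load. alpha is an illustrative weighting parameter."""
    load = queue_len / queue_cap          # fraction of the buffer in use
    return alpha * delivery_prob - (1 - alpha) * load

def pick_relay(candidates):
    # Choose the neighbor with the best capability/load trade-off.
    return max(candidates, key=lambda c: forwarding_score(c["p"], c["q"], c["cap"]))

nodes = [
    {"id": "A", "p": 0.9, "q": 45, "cap": 50},  # capable but heavily loaded
    {"id": "B", "p": 0.7, "q": 5,  "cap": 50},  # less capable, lightly loaded
]
print(pick_relay(nodes)["id"])  # → B
```

The point of the example: a pure delivery-capability metric would always pick node A, but once load is considered, the lightly loaded node B wins, avoiding the congestion that the abstract identifies as a failure mode of earlier protocols.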
Advisors/Committee Members: Stoleru, Radu (advisor), Bettati, Riccardo (committee member), Jiang, Anxiao (committee member), Sprintson, Alexander (committee member).
Subjects/Keywords: Opportunistic Mobile Networks; Routing Protocol; Wireless Networks; Mobile Data Offloading
APA (6th Edition):
Yang, C. (2018). A COMMUNICATION FRAMEWORK FOR OPPORTUNISTIC MOBILE NETWORKS WITH DIVERSE CONNECTIVITY. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/173495
Chicago Manual of Style (16th Edition):
Yang, Chen. “A COMMUNICATION FRAMEWORK FOR OPPORTUNISTIC MOBILE NETWORKS WITH DIVERSE CONNECTIVITY.” 2018. Doctoral Dissertation, Texas A&M University. Accessed April 21, 2021.
http://hdl.handle.net/1969.1/173495.
MLA Handbook (7th Edition):
Yang, Chen. “A COMMUNICATION FRAMEWORK FOR OPPORTUNISTIC MOBILE NETWORKS WITH DIVERSE CONNECTIVITY.” 2018. Web. 21 Apr 2021.
Vancouver:
Yang C. A COMMUNICATION FRAMEWORK FOR OPPORTUNISTIC MOBILE NETWORKS WITH DIVERSE CONNECTIVITY. [Internet] [Doctoral dissertation]. Texas A&M University; 2018. [cited 2021 Apr 21].
Available from: http://hdl.handle.net/1969.1/173495.
Council of Science Editors:
Yang C. A COMMUNICATION FRAMEWORK FOR OPPORTUNISTIC MOBILE NETWORKS WITH DIVERSE CONNECTIVITY. [Doctoral Dissertation]. Texas A&M University; 2018. Available from: http://hdl.handle.net/1969.1/173495

Texas A&M University
28.
Eigenbrodt, Julia M.
Spent Fuel Measurements: Passive Neutron Albedo Reactivity (PNAR) and Photon Signatures.
Degree: PhD, Nuclear Engineering, 2016, Texas A&M University
URL: http://hdl.handle.net/1969.1/156845
► The International Atomic Energy Agency’s (IAEA) safeguards technical objective is the timely detection of a diversion of a significant quantity of nuclear material from peaceful…
(more)
▼ The International Atomic Energy Agency’s (IAEA) safeguards technical objective is the timely detection of a diversion of a significant quantity of nuclear material from peaceful activities to the manufacture of nuclear weapons or of other nuclear explosive devices or for purposes unknown, and deterrence of such diversion by the risk of early detection. An important IAEA task towards meeting this objective is the ability to accurately and reliably measure spent nuclear fuel (SNF) to verify reactor operating parameters and verify that the fuel has not been removed from reactors or SNF storage facilities. This dissertation analyzes a method to improve the state-of-the-art of nuclear material safeguards measurements using two combined measurement techniques: passive neutron albedo reactivity (PNAR) and passive spectral photon measurements.
PNAR was used for measurements of SNF in Japan as well as fresh fuel pins at Los Alamos National Laboratory (LANL). The measured PNAR signal was shown to trend well with neutron multiplication and fissile content of the SNF. The PNAR measurements showed a 73% change in signal for a fuel burnup range of 7.1 to 19.2 GWd/MTHM of spent mixed-oxide (MOX) fuel and a 40% change in signal over a range of initial 235U enrichment from 0.2% to 3.2% in UO2 fuel.
Photon measurements were performed on a wide range of SNF pins to determine which photon signatures are visible in different sets of fuels. These signatures were then investigated and tested using a sensitivity analysis to determine the spent fuel parameters to which each signal is most sensitive. These photon signatures can be used to determine SNF parameters that can support PNAR determination of SNF fissile content.
Based on the results from these measurements, we have concluded that spectral photon measurements can determine operating parameters to improve the implementation of PNAR. We also concluded that PNAR can accurately measure multiplication and fissile content in SNF with standard deviations of 1% and 4%, respectively. The PNAR and photon measurements can be used together as a powerful tool to support the IAEA safeguards technical objective.
Advisors/Committee Members: Charlton, William S (advisor), Chirayath, Sunil (committee member), Tsvetkov, Pavel (committee member), Bettati, Riccardo (committee member).
Subjects/Keywords: PNAR; NDA; safeguards; nuclear engineering; spent nuclear fuel; photon measurements; neutron measurements
APA (6th Edition):
Eigenbrodt, J. M. (2016). Spent Fuel Measurements: Passive Neutron Albedo Reactivity (PNAR) and Photon Signatures. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/156845
Chicago Manual of Style (16th Edition):
Eigenbrodt, Julia M. “Spent Fuel Measurements: Passive Neutron Albedo Reactivity (PNAR) and Photon Signatures.” 2016. Doctoral Dissertation, Texas A&M University. Accessed April 21, 2021.
http://hdl.handle.net/1969.1/156845.
MLA Handbook (7th Edition):
Eigenbrodt, Julia M. “Spent Fuel Measurements: Passive Neutron Albedo Reactivity (PNAR) and Photon Signatures.” 2016. Web. 21 Apr 2021.
Vancouver:
Eigenbrodt JM. Spent Fuel Measurements: Passive Neutron Albedo Reactivity (PNAR) and Photon Signatures. [Internet] [Doctoral dissertation]. Texas A&M University; 2016. [cited 2021 Apr 21].
Available from: http://hdl.handle.net/1969.1/156845.
Council of Science Editors:
Eigenbrodt JM. Spent Fuel Measurements: Passive Neutron Albedo Reactivity (PNAR) and Photon Signatures. [Doctoral Dissertation]. Texas A&M University; 2016. Available from: http://hdl.handle.net/1969.1/156845

Texas A&M University
29.
Huang, Lin.
Using Imprecise Computing for Improved Real-Time Scheduling.
Degree: PhD, Computer Engineering, 2019, Texas A&M University
URL: http://hdl.handle.net/1969.1/188748
► Conventional hard real-time scheduling is often overly pessimistic due to the worst case execution time estimation. The pessimism can be mitigated by exploiting imprecise computing…
(more)
▼ Conventional hard real-time scheduling is often overly pessimistic due to worst-case execution time estimation. The pessimism can be mitigated by exploiting imprecise computing in applications where occasional small errors are acceptable. This approach has been investigated in a few previous works, which are restricted to the preemptive case. We study how to make use of imprecise computing in uniprocessor non-preemptive real-time scheduling, which is known to be more difficult than its preemptive counterpart. Several heuristic algorithms are developed for periodic tasks with independent or cumulative errors due to imprecision. Simulation results show that the proposed techniques can significantly improve task schedulability and achieve the desired accuracy–schedulability tradeoff. The benefit of considering imprecise computing is further confirmed by a prototype implementation in Linux.
The mixed-criticality system is a popular model for reducing pessimism in real-time scheduling while providing guarantees for critical tasks in the presence of unexpected overruns. However, it is controversial due to some drawbacks. First, all low-criticality tasks are dropped in high-criticality mode, although they are still needed. Second, a single high-criticality job overrun leads to the pessimistic high-criticality mode for all high-criticality tasks, and consequently resource utilization becomes inefficient. We attempt to tackle these two limitations of mixed-criticality systems simultaneously in multiprocessor scheduling, whereas several recent works address them mostly in uniprocessor scheduling. We study how to achieve graceful degradation of low-criticality tasks by continuing their execution with imprecise computing, or even precise computing if there is sufficient utilization slack. Schedulability conditions under this Variable-Precision Mixed-Criticality (VPMC) system model are investigated for partitioned scheduling and global fpEDF-VD scheduling. A deferred switching protocol is introduced so that the chance of switching to high-criticality mode is significantly reduced. Moreover, we develop a precision optimization approach that maximizes precise computing of low-criticality tasks through a 0-1 knapsack formulation. Experiments are performed through both software simulations and Linux prototyping with consideration of overhead. Schedulability of the proposed methods is studied so that the quality of service for low-criticality tasks is improved while all deadline constraints are guaranteed to be satisfied. The proposed precision optimization can largely reduce computing errors compared to constantly executing low-criticality tasks with imprecise computing in high-criticality mode.
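The 0-1 knapsack formulation mentioned above can be sketched as follows: each low-criticality task has a utilization cost for running precisely and an error-reduction gain, and the scheduler picks the subset that maximizes total gain within the available slack. The costs, gains, and integer-percentage encoding below are illustrative assumptions, not the dissertation's actual parameters.

```python
def precision_knapsack(costs, gains, slack):
    """Standard 0-1 knapsack by dynamic programming.

    costs: extra utilization (integer percentage points) each low-criticality
           task needs to run precisely instead of imprecisely.
    gains: error reduction achieved by running that task precisely.
    slack: available utilization slack (integer percentage points).
    Returns the maximum total error reduction achievable within the slack.
    """
    best = [0] * (slack + 1)
    for c, g in zip(costs, gains):
        # Iterate slack downwards so each task is selected at most once.
        for s in range(slack, c - 1, -1):
            best[s] = max(best[s], best[s - c] + g)
    return best[slack]

# Three hypothetical low-criticality tasks; 30 points of slack.
print(precision_knapsack([10, 20, 15], [60, 100, 70], 30))  # → 160
```

Here the optimal choice is the first two tasks (cost 10 + 20 = 30, gain 160); greedily taking the cheapest tasks first would instead pick costs 10 and 15 for a gain of only 130, which is why an exact knapsack solution pays off.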
Advisors/Committee Members: Hu, Jiang (advisor), Shi, Weiping (committee member), Bettati, Riccardo (committee member), Xie, Le (committee member).
Subjects/Keywords: real-time scheduling; imprecise computing
APA (6th Edition):
Huang, L. (2019). Using Imprecise Computing for Improved Real-Time Scheduling. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/188748
Chicago Manual of Style (16th Edition):
Huang, Lin. “Using Imprecise Computing for Improved Real-Time Scheduling.” 2019. Doctoral Dissertation, Texas A&M University. Accessed April 21, 2021.
http://hdl.handle.net/1969.1/188748.
MLA Handbook (7th Edition):
Huang, Lin. “Using Imprecise Computing for Improved Real-Time Scheduling.” 2019. Web. 21 Apr 2021.
Vancouver:
Huang L. Using Imprecise Computing for Improved Real-Time Scheduling. [Internet] [Doctoral dissertation]. Texas A&M University; 2019. [cited 2021 Apr 21].
Available from: http://hdl.handle.net/1969.1/188748.
Council of Science Editors:
Huang L. Using Imprecise Computing for Improved Real-Time Scheduling. [Doctoral Dissertation]. Texas A&M University; 2019. Available from: http://hdl.handle.net/1969.1/188748

Texas A&M University
30.
Wu, Chih-Peng.
CONTAINER MANAGEMENT FOR SERVERLESS EDGE COMPUTING OFFERINGS.
Degree: MS, Computer Science, 2019, Texas A&M University
URL: http://hdl.handle.net/1969.1/188813
► Under the serverless paradigm, containers may serve as the runtime execution environments for processing clients’ service requests. For service providers aiming at broad customer bases,…
(more)
▼ Under the serverless paradigm, containers may serve as the runtime execution environments for processing clients’ service requests. For service providers aiming at broad customer bases, the portfolio of containers to be made available can be quite large. In edge computing scenarios, where hardware elasticity is limited or nonexistent, an effective method for provisioning and destroying containers is crucial to increase service availability and mitigate startup overheads.
However, current methods have not been designed for Internet-of-Things (IoT) applications – one major use case in edge computing. In this work, we introduce a new container management method that exploits predictable patterns present in the workload to decrease request latency in such environments. We propose a new container management method, called Look-Ahead Request Serving (LARS), designed for IoT applications that exhibit periodicity. We demonstrate that for workloads that invoke requests periodically (e.g., environmental sensors, surveillance cameras, smart home gadgets), our method outperforms the method in OpenWhisk, an open-source serverless platform, attaining a 37% and a 78% improvement in startup overhead in a smart gym and a smart home scenario, respectively.
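The look-ahead idea for periodic IoT workloads can be sketched in a few lines: estimate the next request time from the observed inter-arrival pattern and pre-warm the container just early enough to hide the cold-start latency. The mean-period predictor and function names below are illustrative assumptions; the actual LARS policy is more involved.

```python
def next_arrival(history):
    """Estimate the next request time for a periodic workload from its
    observed arrival timestamps (simple mean inter-arrival predictor)."""
    gaps = [b - a for a, b in zip(history, history[1:])]
    period = sum(gaps) / len(gaps)
    return history[-1] + period

def prewarm_time(history, startup_cost):
    # Launch the container this early so it is warm when the request lands.
    return next_arrival(history) - startup_cost

# A sensor reporting roughly every 10 s, with a 1.5 s container cold start:
# pre-warm at ~38.4 s so the ~39.9 s request hits a running container.
print(prewarm_time([0.0, 10.0, 20.1, 29.9], 1.5))
```

Reactive platforms pay the cold-start cost on every eviction-then-request cycle; exploiting periodicity moves that cost off the request's critical path entirely, which is where the reported 37% and 78% startup-overhead improvements would come from.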
Advisors/Committee Members: Da Silva, Dilma (advisor), Bettati, Riccardo (committee member), Gautam, Natarajan (committee member).
Subjects/Keywords: serverless; docker
APA (6th Edition):
Wu, C. (2019). CONTAINER MANAGEMENT FOR SERVERLESS EDGE COMPUTING OFFERINGS. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/188813
Chicago Manual of Style (16th Edition):
Wu, Chih-Peng. “CONTAINER MANAGEMENT FOR SERVERLESS EDGE COMPUTING OFFERINGS.” 2019. Masters Thesis, Texas A&M University. Accessed April 21, 2021.
http://hdl.handle.net/1969.1/188813.
MLA Handbook (7th Edition):
Wu, Chih-Peng. “CONTAINER MANAGEMENT FOR SERVERLESS EDGE COMPUTING OFFERINGS.” 2019. Web. 21 Apr 2021.
Vancouver:
Wu C. CONTAINER MANAGEMENT FOR SERVERLESS EDGE COMPUTING OFFERINGS. [Internet] [Masters thesis]. Texas A&M University; 2019. [cited 2021 Apr 21].
Available from: http://hdl.handle.net/1969.1/188813.
Council of Science Editors:
Wu C. CONTAINER MANAGEMENT FOR SERVERLESS EDGE COMPUTING OFFERINGS. [Masters Thesis]. Texas A&M University; 2019. Available from: http://hdl.handle.net/1969.1/188813