You searched for +publisher:"Colorado State University" +contributor:("Malaiya, Yashwant")
Showing records 1 – 25 of 25 total matches.

Colorado State University
1.
Puranik, Mugdha.
Optimal design space exploration for FPGA-based accelerators: a case study on 1-D FDTD.
Degree: MS(M.S.), Electrical and Computer Engineering, 2015, Colorado State University
URL: http://hdl.handle.net/10217/170328
Hardware accelerators are optimized functional blocks designed to offload specific tasks from the CPU, speed them up, and reduce their dynamic power consumption. It is important to develop a methodology to efficiently implement critical algorithms on the hardware accelerator and to perform systematic design space exploration to identify optimal designs. In this thesis, we design, as a case study, a hardware accelerator for the 1-D Finite Difference Time Domain (FDTD) algorithm, a compute-intensive technique for modeling electromagnetic behavior. Memory limitations and bandwidth constraints result in long run times on large problems. Hence, an approach that increases the speed of the FDTD method and reduces its bandwidth requirement is necessary. To achieve this, we design an FPGA-based hardware accelerator. We implement the accelerator based on time-space tiling. In our design, p processing elements (PEs) execute p parallelogram-shaped tiles in parallel, each of which constitutes one tile pass. Our design uses a small amount of redundant computation to enable all PEs to start "nearly" concurrently, thereby fully exploiting the available parallelism. A further optimization allows us to reduce the main memory data transfers of this design by a factor of two. These optimizations are integrated in hardware and implemented in Verilog in Altera's Quartus II, yielding a PE that delivers a throughput of one "iteration (i.e., two results) per cycle". To explore the feasible design space systematically, we formulate an optimization problem with the objective of minimizing the total execution time for given resource constraints. We solve the optimization problem analytically, and therefore have a provably optimal design in the feasible space. We also observe that the results for different problem sizes reveal that the optimal design may not always match common-sense intuition.
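For context, the stencil such a PE pipelines can be written in a few lines of software. The following is a minimal 1-D FDTD leapfrog kernel in normalized units; each time step updates H from E and E from H, the "two results" per iteration mentioned above. The Courant factor of 0.5 and the Gaussian soft source are illustrative assumptions, and nothing here reflects the thesis's tiled Verilog design.

```python
import numpy as np

def fdtd_1d(nx, nt, source_pos=0):
    """Minimal 1-D FDTD kernel (free space, normalized units, assumed
    Courant factor 0.5). One time step refreshes H from E and E from H,
    i.e. the two results per iteration that a PE delivers per cycle."""
    E = np.zeros(nx)        # electric field samples
    H = np.zeros(nx - 1)    # magnetic field samples, staggered half a cell
    for t in range(nt):
        H += 0.5 * (E[1:] - E[:-1])            # H update (discrete curl of E)
        E[1:-1] += 0.5 * (H[1:] - H[:-1])      # E update (discrete curl of H)
        E[source_pos + 1] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
    return E, H
```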
Advisors/Committee Members: Rajopadhye, Sanjay (advisor), Pasricha, Sudeep (committee member), Malaiya, Yashwant (committee member).
Subjects/Keywords: hardware accelerators; stencil computations

Colorado State University
2.
Mahindre, Gunjan S.
Coordinate repair and medial axis detection in virtual coordinate based sensor networks.
Degree: MS(M.S.), Electrical and Computer Engineering, 2014, Colorado State University
URL: http://hdl.handle.net/10217/88571
Wireless Sensor Networks (WSNs) perform several operations, such as routing, topology extraction, data storage, and data processing, that depend on the efficiency of the localization scheme deployed in the network. Thus, WSNs need to be equipped with a good localization scheme, as the addressing scheme affects the performance of the system as a whole. There are geographical as well as Virtual Coordinate Systems (VCS) for WSN localization. Although Virtual Coordinate (VC) based algorithms work well after system establishment, they are hampered by events such as node and link failures, which are unpredictable and inevitable in WSNs, where sensor nodes have only a limited amount of energy. This degrades the performance of algorithms and reduces the overall life of the network. WSNs today need a method to recover from such node failures at the foundation level and maintain the performance of various functions despite node failure events. The main focus of this thesis is preserving the performance of virtual coordinate based algorithms in the presence of node failure. WSNs are subject to changes even during their operation, which means that the topology of a sensor network can change dynamically throughout its lifetime. Knowing the shape, size, and variations of the network topology helps to repair the algorithm better. Being centrally located in the network, the medial nodes provide information such as the width of the network at a particular cross-section and the distance of network nodes from boundary nodes. This information can be used as a foundation for applications such as network segmentation, VC system implementation, routing scheme implementation, topology extraction, and efficient data storage and recovery. We propose a new approach for medial axis extraction in sensor networks. This distributed algorithm is very flexible with respect to network shape and size. Its main advantage is that, unlike existing algorithms, it works for networks with low node degrees. An algorithm for repairing VCS when network nodes fail is presented that eliminates the need for VC regeneration. This helps maintain efficient performance for all network sizes. The system performance degrades at higher node failure percentages relative to the network size, but the degradation is not abrupt, and the system degrades gracefully even under sudden node failure patterns. A hierarchical virtual coordinate system is proposed and evaluated for its response to network events such as routing and node failures. We were also able to extract the medial axis for various networks with the presented medial axis detection scheme. The networks used for testing cover a range of shapes and average node degrees from 3 to 8. Discussions of the VC repair algorithm and the novel medial axis extraction scheme provide insight into the nature of the proposed schemes. We evaluate the scope and limitations of the VCS repair algorithm and the medial axis detection scheme. Performance of the VC repair algorithm in a…
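Both the virtual coordinates and the medial-axis idea rest on hop distances. Below is a minimal sketch of that primitive (multi-source BFS) together with a crude local-maximum notion of medial nodes; it assumes a connected network given as an adjacency dict with a known boundary set, and illustrates the idea only, not the thesis's distributed algorithm.

```python
from collections import deque

def hop_distances(adj, sources):
    """Minimum hop count from any source node, via multi-source BFS.
    adj: {node: iterable of neighbor nodes}; assumes a connected graph."""
    dist = {s: 0 for s in sources}
    q = deque(sources)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def medial_nodes(adj, boundary):
    """Nodes whose hop distance to the boundary is a local maximum:
    a crude stand-in for the medial axis notion described above."""
    d = hop_distances(adj, boundary)
    return [u for u in adj if all(d[u] >= d[v] for v in adj[u])]
```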
Advisors/Committee Members: Jayasumana, Anura (advisor), Luo, J. Rockey (committee member), Malaiya, Yashwant (committee member).

Colorado State University
3.
He, Xin.
Outlier detection approach for PCB testing based on Principal Component Analysis, An.
Degree: MS(M.S.), Electrical and Computer Engineering, 2011, Colorado State University
URL: http://hdl.handle.net/10217/47315
Capacitive Lead Frame Testing, a widely used approach for printed circuit board testing, is very effective for open solder detection. The approach, however, is affected by mechanical variations during testing and by tolerances of the electrical parameters of components, making it difficult to use threshold-based techniques for defect detection. A novel approach is presented in this thesis for identifying board runs that are likely to be outliers. Based on Principal Component Analysis (PCA), this approach treats the set of capacitance measurements of individual connectors or sockets in a holistic manner to overcome the measurement and component parameter variations inherent in test data. The effectiveness of the method is evaluated using measurements on different types of boards. Based on multiple analyses of different measurement datasets, the most suitable statistics for outlier detection and the relevant parameter values are also identified. Enhancements to the PCA-based technique using the concept of test-pin windows are presented to increase the resolution of the analysis. When applied to one test window at a time, PCA is able to detect the physical position of potential defects. Combining the basic and enhanced techniques improves the effectiveness of outlier detection. The PCA-based approach is extended to detect and compensate for systematic variation of measurement data caused by tilt or shift of the sense plate. This scheme promises to enhance the accuracy of outlier detection when measurements come from different fixtures. Compensation approaches are introduced to correct 'abnormal' measurements due to sense-plate variations to a 'normal' and consistent baseline. The effectiveness of this approach in the presence of the two common forms of mechanical variations is illustrated. The potential to use PCA-based analysis to estimate the relative amount of tilt and shift in the sense plate is demonstrated.
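The holistic treatment described here can be pictured with a generic PCA residual (Q-statistic) screen: project the per-board measurement vectors onto the leading principal components and flag boards whose unexplained residual is unusually large. The component count and the mean-plus-3-sigma cutoff below are placeholder choices, not the statistics the thesis actually selects.

```python
import numpy as np

def pca_outliers(X, n_components=2, alpha=3.0):
    """Flag likely outlier boards from a measurement matrix X
    (rows = board runs, columns = test pins / capacitance readings).
    Generic PCA residual screen; thresholds are illustrative."""
    Xc = X - X.mean(axis=0)                       # center each pin's readings
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components]                         # leading principal directions
    resid = Xc - (Xc @ P.T) @ P                   # part PCA cannot explain
    q = (resid ** 2).sum(axis=1)                  # squared residual per board
    cutoff = q.mean() + alpha * q.std()           # placeholder decision rule
    return np.where(q > cutoff)[0]                # indices of suspect boards
```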
Advisors/Committee Members: Jayasumana, Anura P. (advisor), Malaiya, Yashwant K. (committee member), Reising, Steven C. (committee member).
Subjects/Keywords: board; principal component analysis; open defect; CDF

Colorado State University
4.
Algarni, Abdullah Mahdi.
Quantitative economics of security: software vulnerabilities and data breaches.
Degree: PhD, Computer Science, 2016, Colorado State University
URL: http://hdl.handle.net/10217/176634
Security vulnerabilities can represent enormous risks to society and business organizations. A large percentage of vulnerabilities in software are discovered by individuals external to the developing organization. These vulnerabilities are often exchanged for monetary rewards or a negotiated selling price, giving rise to vulnerability markets. Some of these markets are regulated, while others are unregulated. Buyers in the unregulated markets include individuals, groups, and government organizations who intend to use the vulnerabilities for potential attacks. Vulnerabilities traded through such markets can cause great economic, organizational, and national security risks. Vulnerability markets can reduce risks if the vulnerabilities are acquired and remedied by the software developers. Studying vulnerability markets and their related issues provides insight into their underlying mechanisms, which can be used to assess the risks and to develop approaches for reducing and mitigating them, enhancing security against data breaches. Some aspects of vulnerabilities, namely discovery, dissemination, and disclosure, have received recent attention. However, the role of the interaction between vulnerability discoverers and vulnerability acquirers has not yet been adequately addressed. This dissertation suggests that a major fraction of discoverers, a majority in some cases, are unaffiliated with the software developers and thus are free to disseminate the vulnerabilities they discover in any way they like. As a result, multiple vulnerability markets have emerged. In the recent vulnerability discovery literature, the vulnerability discoverers have remained anonymous. Although there has been an attempt to model the level of their effort, information regarding their identities, modes of operation, and what they do with the discovered vulnerabilities has not been explored. Reports of buying and selling vulnerabilities are now appearing in the press; however, the nature of the actual vulnerability markets needs to be analyzed. We have attempted to collect detailed information. We have identified the most prolific vulnerability discoverers of the past decade and examined their motivations and methods. A large percentage of these discoverers are located outside of the US. We have contacted several of the most prolific discoverers in order to collect firsthand information regarding their techniques, motivations, and involvement in the vulnerability markets. We examine why many of the discoverers appear to retire after a highly successful vulnerability-finding career. We found that the discoverers had enough experience and a good enough reputation to work officially, with a good salary, at some well-known software development companies. Many security breaches have been reported in the past few years, impacting both large and small organizations. Such breaches may occur through the exploitation of system vulnerabilities. There has been considerable disagreement about the overall cost and probability…
Advisors/Committee Members: Malaiya, Yashwant K. (advisor), Ray, Indrakshi (committee member), Ray, Indrajit (committee member), Kling, Robert (committee member).

Colorado State University
5.
Mandyam Narasiodeyar, Raghunandan.
Impact of resequencing buffer distribution on packet reordering.
Degree: MS(M.S.), Electrical and Computer Engineering, 2011, Colorado State University
URL: http://hdl.handle.net/10217/47296
Packet reordering on the Internet has become an unavoidable phenomenon: packets get displaced during transmission, resulting in out-of-order arrivals at the destination. Resequencing buffers are used at the end nodes to recover from packet reordering. This thesis presents analytical estimation methods for "Reorder Density" (RD) and "Reorder Buffer occupancy Density" (RBD), two metrics of packet reordering, for packet sequences as they traverse resequencing nodes with limited buffers. For the analysis, a "Lowest First Resequencing Algorithm" is defined and used in individual nodes to resequence packets back into order. The results are obtained by studying the patterns of sequences as they traverse resequencing nodes. The estimates of RD and RBD are found to vary for sequences containing different types of packet reordering patterns, such as Independent Reordering, Embedded Reordering, and Overlapped Reordering. Therefore, multiple estimates, in the form of theorems catering to the different reordering patterns, are presented. The proposed estimation models assist in the allocation of resources across intermediate network elements to mitigate the effect of packet reordering. Theorems to derive RBD from RD, when only RD is available, are also presented. Just as with the resequencing estimation models, the effective RBD for a given RD is found to vary across packet reordering patterns; therefore, multiple theorems catering to different patterns are presented. Such RBD estimates are useful for allocating resources based on QoS criteria in which one of the metrics is RD. Simulations driven by Internet measurement traces and random sequences are used to verify the analytical results. Since a high degree of packet reordering is known to affect the quality of applications using TCP and UDP on the Internet, this study has broad applicability in the area of mobile communications and networks.
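For readers unfamiliar with the two metrics, here is a simple rendering of RD and RBD for a lossless, duplicate-free arrival sequence (the general definitions also handle loss and duplication): RD is the histogram of displacements (receive index minus sequence number), and RBD is the histogram of resequencing-buffer occupancy after each arrival when in-order packets are released immediately. This sketches the metrics only, not the thesis's estimation theorems.

```python
from collections import Counter

def reorder_metrics(arrivals):
    """RD and RBD for a permutation of sequence numbers 0..n-1.
    Assumes no loss or duplication; an illustrative simplification."""
    rd, rbd = Counter(), Counter()
    buf, next_expected = set(), 0
    for recv_index, seq in enumerate(arrivals):
        rd[recv_index - seq] += 1        # displacement of this packet
        buf.add(seq)
        while next_expected in buf:      # release in-order packets
            buf.remove(next_expected)
            next_expected += 1
        rbd[len(buf)] += 1               # buffer occupancy after this arrival
    n = len(arrivals)
    return ({d: c / n for d, c in sorted(rd.items())},
            {k: c / n for k, c in sorted(rbd.items())})

# reorder_metrics([0, 2, 1, 3])
#   -> RD {-1: 0.25, 0: 0.5, 1: 0.25}, RBD {0: 0.75, 1: 0.25}
```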
Advisors/Committee Members: Jayasumana, Anura P. (advisor), Malaiya, Yashwant K. (committee member), Pasricha, Sudeep (committee member).
Subjects/Keywords: buffer; resequencing; packet reordering

Colorado State University
6.
Pendharkar, Gayatri Arun.
Topology inference of Smart Fabric grids - a virtual coordinate based approach.
Degree: MS(M.S.), Electrical and Computer Engineering, 2020, Colorado State University
URL: http://hdl.handle.net/10217/208450
Driven by the increasing potency and decreasing cost and size of electronic devices capable of sensing, actuating, processing, and wirelessly communicating, the Internet of Things (IoT) is expanding into manufacturing plants, complex structures, and harsh environments, with the potential to impact the way we live and work. Subnets of simple devices, ranging from smart RFIDs to tiny sensors/actuators deployed in massive numbers and forming complex 2-D surfaces, manifolds, complex 3-D physical spaces, and fabrics, will be a key constituent of this infrastructure. Smart Fabrics (SFs) are emerging with embedded IoT devices that can do things traditional fabrics cannot, including sensing, storing, communicating, transforming data, and harvesting and conducting energy. These SFs are expected to have a wide range of applications in the near future in health monitoring, space stations, commercial building rooftops, and more. With this innovative Smart Fabric technology at hand, there is a need to create algorithms for programming the smart nodes to facilitate communication, monitoring, and data routing within the fabric. Automatically detecting the location, shape, and other physical characteristics will be essential, but without resorting to localization techniques such as the Global Positioning System (GPS), whose size and cost may not be acceptable for many large-scale applications. Measuring physical distances and obtaining geographical coordinates becomes infeasible for many IoT networks, particularly those deployed in harsh and complex environments. In SFs, the proximity between nodes makes it impossible to deploy technologies like GPS or Received Signal Strength Indicator (RSSI) based ranging for distance estimation. This thesis devises a Virtual Coordinate (VC) based method to identify node positions and infer the shape of SFs with embedded grids of IoT devices. In various applications, we expect the nodes to communicate through randomly shaped fabrics in the presence of oddly shaped holes. The geometry of node placement, the shape of the fabric, and its dimensionality affect the identification, shape determination, and routing algorithms. The objective of this research is to infer the shape of the fabric, holes, and other non-operational parts of the fabric under different grid placements. With the ability to construct the topology, efficient data routing can be achieved, damaged regions of fabric can be identified, and, in general, the shape can be inferred for SFs of a wide range of sizes. Clothing and health monitoring being two essential parts of living, SFs that combine both would be a success in the textile market. SFs can be synthesized in space stations as compact sensing devices, assist in patient health monitoring, and even bring a spark to show business. Identifying the positions of different nodes/devices within SF grids is essential for applications and networking functions. We study and devise strategic methods for localization of SFs with rectangular grid placement of nodes using the VC…
Advisors/Committee Members: Jayasumana, Anura P. (advisor), Maciejewski, Anthony A. (committee member), Malaiya, Yashwant K. (committee member).

Colorado State University
7.
Shah, Pritam.
Virtual coordinate based techniques for wireless sensor networks: a simulation tool and localization & planarization algorithms.
Degree: MS(M.S.), Electrical and Computer Engineering, 2013, Colorado State University
URL: http://hdl.handle.net/10217/80311
Wireless Sensor Networks (WSNs) are deployments of smart sensor devices for monitoring environmental or physical phenomena. These sensors can communicate with other sensors within communication range or with a base station. Each sensor, at a minimum, comprises sensing, processing, transmission, and power units. This thesis focuses on virtual coordinate based techniques in WSNs. Virtual Coordinates (VCs) characterize each node in a network by its minimum hop distances to a set of anchor nodes. They provide a compelling alternative to physical coordinates for localization-dependent applications such as routing. Building a WSN testbed is often infeasible and costly, and running real experiments on WSN testbeds is time consuming, difficult, and sometimes not feasible given the scope and size of applications. Simulation is, therefore, the most common approach for developing and testing new protocols and techniques for sensor networks. Though many general and WSN-specific simulation tools are available, no existing tool provides an intuitive interface for virtual coordinate based simulations. A simulator called VCSIM is presented that focuses specifically on the Virtual Coordinate Space (VCS) in WSNs. With this simulator, a user can easily create WSNs of different sizes, shapes, and distributions. Its graphical user interface (GUI) facilitates the placement of anchors and the generation of VCs. Localization in WSNs is important for several reasons, including identification and correlation of gathered data, node addressing, evaluation of node density and coverage, geographic routing, object tracking, and other geographic algorithms. But due to constraints such as limited battery power, processing capabilities, hardware costs, and measurement errors, localization remains a hard problem in WSNs. In certain applications, such as security sensors for intrusion detection, agriculture, land monitoring, and fire alarm sensors in a building, the sensor nodes are deployed in an orderly fashion, in contrast to random deployments. In this thesis, a novel transformation is presented to obtain the positions of nodes from VCs in rectangular, hexagonal, and triangular grid topologies. It is shown that, with certain specific anchor placements, the location of a node can be accurately approximated if the length of the shortest path in the given topology between the node and the anchors equals the length of the shortest path in the full topology (i.e., the topology without any voids) between the same node and anchors. These positions are obtained without the need for any extra localization hardware. The results show that more than 90% of nodes were able to identify their positions in randomly deployed networks of 80% and 85% node density. These positions can then be used for deterministic routing, which seems to achieve better average path lengths than the geographic routing scheme called "Greedy Perimeter Stateless Routing" (GPSR). In many real-world applications, manual deployment is not possible in…
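One simple instance of the transformation, consistent with the shortest-path condition stated above: in a full rectangular grid with anchors at the corners (0, 0) and (W-1, 0), hop distances are Manhattan distances, h1 = x + y and h2 = (W-1-x) + y, so the position follows by elimination. This toy anchor placement is for illustration and is not necessarily the thesis's exact scheme.

```python
def grid_position(h1, h2, width):
    """Recover (x, y) in a rectangular grid from hop distances h1, h2 to
    anchors at corners (0, 0) and (width-1, 0), assuming shortest paths
    equal full-topology (Manhattan) paths:
        h1 = x + y,  h2 = (width - 1 - x) + y."""
    x = (h1 - h2 + width - 1) // 2
    y = (h1 + h2 - (width - 1)) // 2
    return x, y

# In a 5-wide grid the node at (3, 2) has h1 = 5, h2 = 3:
assert grid_position(5, 3, 5) == (3, 2)
```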
Advisors/Committee Members: Jayasumana, Anura P. (advisor), Pasricha, Sudeep (committee member), Malaiya, Yashwant K. (committee member).

Colorado State University
8.
Mussa, Awad A. Younis.
Quantifying the security risk of discovering and exploiting software vulnerabilities.
Degree: PhD, Computer Science, 2016, Colorado State University
URL: http://hdl.handle.net/10217/176641
Most attacks on computer systems and networks are enabled by vulnerabilities in software. Assessing the security risk associated with those vulnerabilities is important. Risk models such as the Common Vulnerability Scoring System (CVSS), the Open Web Application Security Project (OWASP) model, and the Common Weakness Scoring System (CWSS) have been used to qualitatively assess the security risk presented by a vulnerability. CVSS is the de facto standard, and its metrics need to be independently evaluated. In this dissertation, we propose a quantitative approach that uses actual data, mathematical and statistical modeling, data analysis, and measurement. We introduce a novel vulnerability discovery model, the Folded model, that estimates the risk of vulnerability discovery based on the number of residual vulnerabilities in a given piece of software. In addition to estimating the discovery risk of a whole system, this dissertation introduces a novel metric, termed time to vulnerability discovery, to assess the risk of an individual vulnerability's discovery. We also propose a novel vulnerability exploitability risk measure, termed Structural Severity, based on software properties, namely attack entry points, vulnerability location, the presence of dangerous system calls, and reachability analysis. In addition to measurement, this dissertation also proposes predicting vulnerability exploitability risk using internal software metrics. We further propose two approaches for evaluating the CVSS Base metrics. Using the availability of exploits, we first evaluate the performance of the CVSS Exploitability factor and compare it to the Microsoft (MS) rating system. The results show that the exploitability metrics of CVSS and MS have a high false positive rate. This finding motivated us to investigate further. To that end, we introduce vulnerability reward programs (VRPs) as a novel ground truth for evaluating CVSS Base scores. The results show that the notable lack of exploits for high severity vulnerabilities may be the result of prioritized fixing of vulnerabilities.
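The CVSS-evaluation step reduces to a classification question: does a high exploitability score predict that an exploit exists? A minimal false-positive-rate check of that kind is sketched below with synthetic scores and labels, purely to make the reported finding's metric concrete; the cutoff and data are invented.

```python
def false_positive_rate(scores, has_exploit, cutoff=0.7):
    """Among vulnerabilities with no known exploit, the fraction that a
    score-based rule still flags as high risk. Synthetic illustration."""
    fp = sum(s >= cutoff and not e for s, e in zip(scores, has_exploit))
    tn = sum(s < cutoff and not e for s, e in zip(scores, has_exploit))
    return fp / (fp + tn)

scores      = [0.9, 0.8, 0.75, 0.4, 0.85, 0.3]   # invented exploitability scores
has_exploit = [True, False, False, False, False, False]
print(false_positive_rate(scores, has_exploit))  # 0.6: most flags are false alarms
```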
Advisors/Committee Members: Malaiya, Yashwant (advisor), Ray, Indrajit (committee member), Anderson, Charles W. (committee member), Vijayasarathy, Leo (committee member).
Subjects/Keywords: software security; vulnerabilities exploitation; vulnerability rewards program and time to vulnerability disclosure; software vulnerabilities; cvss and OWASP metrics; vulnerabilities risk and severity

Colorado State University
9.
Joh, HyunChul.
Quantitative analyses of software vulnerabilities.
Degree: PhD, Computer Science, 2011, Colorado State University
URL: http://hdl.handle.net/10217/70444
There have been numerous studies addressing computer security and software vulnerability management, most taking a qualitative perspective. In many other disciplines, quantitative analyses have been indispensable for performance assessment, metric measurement, functional evaluation, and statistical modeling. Quantitative approaches can also improve software risk management by providing guidelines, obtained from actual data-driven analyses, for the optimal allocation of resources for security testing, scheduling, and the development of security patches. Quantitative methods allow more objective and accurate estimates of future trends than qualitative approaches alone, because they apply statistical methods, which have proven to be a powerful prediction approach in several research fields, to real datasets. A quantitative methodology makes it possible for end-users to assess the risks posed by vulnerabilities in software systems, and potential breaches, without getting burdened by the details of every individual vulnerability. At the moment, quantitative risk analysis in information security is still in its infancy. Recently, however, researchers have started to explore various software vulnerability related attributes quantitatively, as the vulnerability datasets have become large enough for statistical analyses. This dissertation presents quantitative analyses dealing with i) modeling vulnerability discovery processes in major Web servers and browsers, ii) the relationship between the performance of S-shaped vulnerability discovery models and the skew in the vulnerability datasets examined, iii) linear vulnerability discovery trends in multi-version software systems, iv) periodic behavior in the weekly exploitation and patching of vulnerabilities as well as the long-term vulnerability discovery process, and v) software security risk evaluation with respect to the vulnerability lifecycle and CVSS. Results show good vulnerability discovery model fits and reasonable prediction capabilities for both time-based and effort-based models on datasets from Web servers and browsers. Results also show that AML and Gamma distribution based models perform better than other S-shaped models on left-skewed and right-skewed datasets, respectively. We find that code sharing among successive versions causes a linear discovery pattern. We establish that there are indeed long- and short-term periodic patterns in software vulnerability related activities, which had been only vaguely recognized by security researchers. Lastly, a framework for software security risk assessment is proposed that allows a comparison of software systems in terms of risk and potential approaches for optimizing remediation.
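The AML model referred to above is the Alhazmi-Malaiya Logistic vulnerability discovery model, an S-shaped curve for cumulative discovered vulnerabilities, Omega(t) = B / (B*C*exp(-A*B*t) + 1), which saturates at B. A minimal fitting sketch follows; the monthly counts and starting values are synthetic illustrations, not data from the dissertation.

```python
import numpy as np
from scipy.optimize import curve_fit

def aml(t, A, B, C):
    """Alhazmi-Malaiya Logistic (AML) model: cumulative vulnerabilities
    Omega(t) = B / (B*C*exp(-A*B*t) + 1), saturating at B."""
    return B / (B * C * np.exp(-A * B * t) + 1.0)

# Synthetic monthly cumulative counts shaped like a typical S-curve.
t = np.arange(1, 25, dtype=float)
counts = np.array([2, 3, 5, 8, 13, 20, 29, 38, 47, 55, 61, 66,
                   70, 73, 75, 77, 78, 79, 80, 80, 81, 81, 81, 82], dtype=float)
(A, B, C), _ = curve_fit(aml, t, counts, p0=[0.006, 85.0, 0.7])
print(f"estimated saturation B = {B:.1f} vulnerabilities")
```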
Advisors/Committee Members: Malaiya, Yashwant K. (advisor), Ray, Indrajit (committee member), Ray, Indrakshi (committee member), Jayasumana, Anura P. (committee member).
Subjects/Keywords: modeling; quantitative analysis; risk; security; software; vulnerability discovery process

Colorado State University
10.
Wedyan, Fadi.
Testing with state variable data-flow criteria for aspect-oriented programs.
Degree: PhD, Computer Science, 2011, Colorado State University
URL: http://hdl.handle.net/10217/48181
Data-flow testing approaches have been used for procedural and object-oriented (OO) programs, and have been empirically shown to be effective in detecting faults. However, few such approaches have been proposed for aspect-oriented (AO) programs. In an AO program, data-flow interactions can occur between the base classes and aspects, which can affect the behavior of both. Faults resulting from such interactions are hard to detect unless the interactions are specifically targeted during testing. In this research, we propose a data-flow testing approach for AO programs. In an AO program, an aspect and a base class interact either through parameters passed from advised methods in the base class to the advice, or by the advice directly reading and writing the base class state variables. We identify a group of def-use associations (DUAs) that are based on the base class state variables and propose a set of data-flow test criteria that require executing these DUAs. We identify fault types that result from incorrect data-flow interactions in AO programs and extend an existing AO fault model to include these faults. We implemented our approach in a tool that identifies the DUAs targeted by the proposed criteria, runs a test suite, and computes the coverage results. We conducted an empirical study comparing the cost and effectiveness of the proposed criteria with two control-flow criteria, using four subject programs. We seeded faults in the programs using three mutation tools: AjMutator, Proteum/AJ, and μJava. We used a test generation tool called RANDOOP to generate a pool of random test cases. To produce a test suite that satisfies a criterion, we randomly selected test cases from the pool until the required coverage for the criterion was reached. We evaluated three dimensions of the cost of a test criterion. The first is the size of a test suite satisfying the criterion, measured by the number of test cases in the suite. The second is the density of a test case, measured by the number of test cases in the suite divided by the number of test requirements. The third is the time needed to randomly obtain a satisfying test suite, measured by (1) the number of iterations the test suite generator requires to randomly select test cases from the pool until the criterion is satisfied, and (2) the number of iterations per test requirement. Effectiveness is measured by the mutation scores of the test suites satisfying a criterion, evaluated for all faults and for each fault type. Our results show that test suites covering all the DUAs of state variables are more effective in revealing faults than the control-flow criteria, though they cost more in terms of test suite size and effort. The results also show that test suites covering state variable DUAs in advised classes are suitable for detecting most of the fault types in the revised AO…
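A def-use association (DUA) pairs a definition of a variable with a use that it can reach without an intervening redefinition. The aspect/base-class flavor targeted here can be mimicked in miniature with a Python decorator standing in for an advice that writes a base-class state variable; the names and scenario are a toy analogy (the dissertation works with AspectJ programs).

```python
def audit(method):                       # plays the role of an "advice"
    def wrapper(self, amount):
        self.flagged = amount > 1000     # def of a state variable in the advice
        return method(self, amount)
    return wrapper

class Account:                           # plays the role of the "base class"
    def __init__(self):
        self.balance = 0
        self.flagged = False

    @audit
    def deposit(self, amount):
        if self.flagged:                 # use of the advice-defined variable
            amount *= 0.99               # hypothetical fee on large deposits
        self.balance += amount

# A test that drives this (def-in-advice, use-in-method) pair covers the DUA;
# a fault at either end of the interaction would surface here.
acct = Account()
acct.deposit(2000)
assert acct.flagged and acct.balance == 1980.0
```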
Advisors/Committee Members: Ghosh, Sudipto (advisor), Bieman, James M. (committee member), Malaiya, Yashwant K. (committee member), Vijayasarathy, Leo (committee member).
Subjects/Keywords: aspectJ; aspect-oriented testing; coverage tool; data-flow criteria; mutation testing; software testing

Colorado State University
11.
Woo, Sung-Whan.
Simple and dynamic data structure for pattern matching in texts, A.
Degree: PhD, Computer Science, 2011, Colorado State University
URL: http://hdl.handle.net/10217/48169
The demand for pattern matching algorithms is on the rise in diverse areas such as string search, image matching, voice recognition, and bioinformatics. In particular, string search or matching algorithms have grown in popularity as they are applied in text editors, search engines, and bioinformatics. To satisfy these demands, many string matching methods have been developed to search for substrings (pattern strings) within a text; several employ tree data structures, deterministic finite automata, and other structures. The string matching problem is to find all locations of a pattern string P within a text T, where preprocessing of T is allowed in order to facilitate the queries. There has been significant success in finding a pattern string in O(m+k) time, where m is the length of the pattern string and k is the number of occurrences, using data structures that can be constructed in O(n) time, where n is the length of T. Suffix trees and directed acyclic word graphs are such data structures; all of them support searches in O(m+k) time. However, the difficulty of understanding and programming their construction algorithms is rarely mentioned. They also have significant space requirements and take Θ(n) time to update even if one character of T is changed. To solve these problems, we propose the augmented position heap. It can be built in O(n) time and can be used to search for a pattern string in O(m+k) time. Most importantly, when a block of j characters is inserted or deleted, the time to update it is O((h(T) + j)h(T)), where h(T) is the length of the longest substring X of T that occurs at least ||X|| times in T, and ||X|| is the length of X. For texts arising from practical applications, h(T) is typically a slowly growing function of ||T||; for a random text T, its expected value is O(log n). Another issue that must be addressed is the space requirement. The most space-efficient data structure for string search is the suffix array, which uses 2n words and supports searches in O(n log n + m + k) time. A compact representation of the position heap proposed in this thesis also takes 2n words and can be updated in O((h(T) + j)h(T)) time, but takes O(m²+k) time for a search. The best known bound for updating the suffix array or the directed acyclic word graph is O(n), and both take considerably more space. A compact representation proposed in this thesis for the augmented position heap takes 4n words, can be updated just as efficiently as the position heap, and takes O(m+k) time for a search.
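To make the structure concrete, here is a compact sketch of the basic (unaugmented) position heap: suffixes are inserted shortest-first, each insertion adds exactly one node, positions found along the pattern's path are verified candidates at O(m) each, and the subtree below a fully matched path is reported outright, which is exactly the O(m²+k) search behavior cited above. This illustrates the published basic structure, not the augmented heap or the compact representations the thesis develops.

```python
class Node:
    def __init__(self, pos):
        self.pos = pos          # text position stored at this node
        self.children = {}      # character -> child Node

def build_position_heap(text):
    """Insert suffixes text[i:] shortest-first; each insertion walks the
    existing path and then adds exactly one node, so the heap has n+1 nodes."""
    root = Node(None)
    for i in range(len(text) - 1, -1, -1):
        node, j = root, i
        while text[j] in node.children:      # walk never exhausts the suffix
            node = node.children[text[j]]
            j += 1
        node.children[text[j]] = Node(i)
    return root

def search(root, text, pattern):
    """All occurrences of a non-empty pattern, O(m^2 + k) overall."""
    m = len(pattern)
    matches, node, depth = [], root, 0
    while depth < m and pattern[depth] in node.children:
        node = node.children[pattern[depth]]
        depth += 1
        if depth < m:                        # path position: verify in O(m)
            p = node.pos
            if text[p:p + m] == pattern:
                matches.append(p)
    if depth == m:                           # full path: subtree needs no checks
        stack = [node]
        while stack:
            v = stack.pop()
            matches.append(v.pos)
            stack.extend(v.children.values())
    return sorted(matches)

heap = build_position_heap("mississippi")
assert search(heap, "mississippi", "issi") == [1, 4]
```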
Advisors/Committee Members: McConnell, Ross M. (advisor), Bohm, A. P. Willem (committee member), Penttila, Tim (committee member), Malaiya, Yashwant K. (committee member).
Subjects/Keywords: data structure; string searching; pattern matching; dynamic string matching

Colorado State University
12.
Yu, Lijun.
Scenario-based technique to analyze UML design class models, A.
Degree: PhD, Computer Science, 2014, Colorado State University
URL: http://hdl.handle.net/10217/82485
Identifying and resolving design problems in the early design phases can help reduce the number of design errors in implementations. In this dissertation a tool-supported, lightweight, static analysis technique is proposed to rigorously analyze UML design class models that include operations specified using the Object Constraint Language (OCL). A UML design class model is analyzed against a given set of scenarios that describe desired or undesired behaviors. The technique can leverage existing class model analysis tools such as USE and OCLE. The analysis technique is lightweight in that it analyzes the functionality specified in a UML design class model within the scope of a given set of scenarios. It is static because it does not require that the UML design class model be executable. The technique is used to (1) transform a UML design class model into a snapshot transition model that captures valid state transitions, (2) transform the given scenarios into snapshot transitions, and (3) determine whether the snapshot transitions conform to the snapshot transition model. A design inconsistency exists if snapshot transitions that represent desired behaviors do not conform to the snapshot transition model, or if snapshot transitions representing undesired behaviors do conform to it. A Scenario-based UML Design Analysis tool was developed using Kermeta and the Eclipse Modeling Framework. The tool can transform an Ecore design class model into a snapshot transition model and transform scenarios into snapshot transitions, and it is integrated with the USE analysis tool. We used the Scenario-based UML Design Analysis technique to analyze two design class models: a Train Management System model and a Generalized Spatio-Temporal RBAC model. The two demonstration case studies show how the technique can be used to analyze inconsistencies between UML design class models and scenarios. We performed a pilot study to evaluate the effectiveness of the technique. In the pilot study, the technique uncovered at least as many design inconsistencies as manual inspection techniques did, and it did not report false inconsistencies. The pilot study thus provides some evidence that the Scenario-based UML Design Analysis technique is effective. The dissertation also proposes two scenario generation techniques, which can be used to reduce the manual effort needed to produce scenarios by automatically generating a family of scenarios that conform to specified scenario generation criteria.
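The conformance rule just described can be pictured with a small stand-in: reduce the snapshot transition model to a set of allowed (pre-state, operation, post-state) triples and a scenario to a sequence of such triples. The door-controller states below are invented for illustration; the actual models are derived from UML/OCL class models and checked with tools such as USE.

```python
# Hypothetical snapshot transition model for a door controller.
allowed = {
    ("closed", "open",  "opened"),
    ("opened", "close", "closed"),
}

def conforms(scenario):
    """A scenario conforms iff every one of its transitions is in the model."""
    return all(step in allowed for step in scenario)

desired   = [("closed", "open", "opened"), ("opened", "close", "closed")]
undesired = [("opened", "open", "opened")]      # re-opening an open door

# Design is consistent: desired behavior conforms, undesired does not.
assert conforms(desired)
assert not conforms(undesired)
```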
Advisors/Committee Members: France, Robert B. (advisor), Ray, Indrakshi (committee member), Ghosh, Sudipto (committee member), Malaiya, Yashwant (committee member), Turk, Dan (committee member).
Subjects/Keywords: unified modeling language; consistency check; software engineering experiment; scenario; formal verification; formal analysis

Colorado State University
13.
Al Lail, Mustafa.
Unified modeling language framework for specifying and analyzing temporal properties, A.
Degree: PhD, Computer Science, 2018, Colorado State University
URL: http://hdl.handle.net/10217/191492
In the context of Model-Driven Engineering (MDE), designers use the Unified Modeling Language (UML) to create models that drive the entire development process. Once UML models are created, MDE techniques automatically generate code from them. If the models have undetected faults, these are propagated to the code, where they require considerable time and effort to detect and correct. It is therefore mandatory to analyze UML models at earlier stages of the development life-cycle to ensure that MDE techniques produce reliable software. One approach to uncovering design errors is to formally specify and analyze the properties that a system has to satisfy. Although significant research has addressed specifying and analyzing properties, there is no effective and efficient UML-based framework for specifying and analyzing temporal properties. The contribution of this dissertation is a UML-based framework, with tools, that helps UML designers effectively and efficiently specify and analyze temporal properties. In particular, the framework is composed of 1) a UML specification technique that designers can use to specify temporal properties, 2) a rigorous analysis technique for analyzing temporal properties, 3) an optimization technique that scales the analysis to large class models, and 4) a proof-of-concept tool. An evaluation of the framework on two real-world studies shows that the specification technique can be used to specify a variety of temporal properties and that the analysis technique can uncover certain types of design faults. It also demonstrates that the optimization technique can significantly speed up the analysis.
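As a miniature of the analysis question (does every execution satisfy a temporal property?), here is a bounded check of the classic response property "every request is eventually granted" over finite event traces. The framework itself specifies such properties in UML/OCL and analyzes them with model checking; this trace-level sketch with invented event names is only to fix ideas.

```python
def eventually_granted(trace):
    """Bounded check of G(request -> F grant) over a finite trace:
    no request may remain pending at the end of the trace."""
    pending = False
    for event in trace:
        if event == "request":
            pending = True
        elif event == "grant":
            pending = False
    return not pending

assert eventually_granted(["request", "work", "grant"])
assert not eventually_granted(["request", "work"])       # violation found
```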
Advisors/Committee Members: France, Robert B. (advisor), Ray, Indrakshi (advisor), Ray, Indrajit (committee member), Hamid, Idris Samawi (committee member), Malaiya, Yashwant K. (committee member).
Subjects/Keywords: properties; temporal; verification; specification; model checking; UML
APA (6th Edition):
Al Lail, M. (2018). Unified modeling language framework for specifying and analyzing temporal properties, A. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/191492
Chicago Manual of Style (16th Edition):
Al Lail, Mustafa. “Unified modeling language framework for specifying and analyzing temporal properties, A.” 2018. Doctoral Dissertation, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/191492.
MLA Handbook (7th Edition):
Al Lail, Mustafa. “Unified modeling language framework for specifying and analyzing temporal properties, A.” 2018. Web. 08 Mar 2021.
Vancouver:
Al Lail M. Unified modeling language framework for specifying and analyzing temporal properties, A. [Internet] [Doctoral dissertation]. Colorado State University; 2018. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/191492.
Council of Science Editors:
Al Lail M. Unified modeling language framework for specifying and analyzing temporal properties, A. [Doctoral Dissertation]. Colorado State University; 2018. Available from: http://hdl.handle.net/10217/191492

Colorado State University
14.
Doshi, Saket Sham.
Applications of inertial measurement units in monitoring rehabilitation progress of arm in stroke survivors.
Degree: MS(M.S.), Electrical and Computer Engineering, 2011, Colorado State University
URL: http://hdl.handle.net/10217/48196
► Constraint Induced Movement Therapy (CIMT) has been clinically proven to be effective in restoring functional abilities of the affected arm among stroke survivors. Current CIMT…
(more)
▼ Constraint Induced Movement Therapy (CIMT) has been clinically proven effective in restoring functional abilities of the affected arm among stroke survivors. The current CIMT delivery method lacks a robust technique to monitor rehabilitation progress, which increases the costs of stroke-related health care. Recent advances in the design and manufacturing of Micro Electro Mechanical System (MEMS) inertial sensors have enabled tracking human motion reliably and accurately. This thesis presents three algorithms that enable monitoring of arm movements during CIMT by means of MEMS inertial sensors. The first algorithm quantifies affected-arm usage during CIMT. It filters the arm movement data, sampled during activities of daily life (ADL), and applies a threshold to determine the duration of affected-arm movements. When an activity is performed multiple times, the algorithm counts the number of repetitions performed. The current technique uses a touch/proximity sensor and a motor activity log maintained by the patient to determine CIMT duration. Affected-arm motion is a direct indicator of a CIMT session, and hence this algorithm tracks rehabilitation progress more accurately. Analysis of actual patients' affected-arm movement data shows that the algorithm performs activity detection with an average accuracy of >90%. The second algorithm, which tracks stroke rehabilitation of the affected arm through a histogram of distance traversed, evaluates an objective metric to assess rehabilitation progress. The metric can be used to compare stroke patients based on the functional ability of the affected arm. The algorithm computes the histogram by evaluating distances traversed over a fixed-duration window, and the impact of this window on the algorithm's performance is analyzed. The algorithm has better temporal resolution than another standard objective test, the box and block test (BBT), and calculates the linearly weighted area under the histogram as a score to rank patients by rehabilitation progress. It performs best for patients with chronic stroke and a certain degree of functional ability. Lastly, a Kalman filter based motion tracking algorithm is presented that tracks linear motions in 2D, such that only one axis can experience motion at any given time. The algorithm has high (>95%) accuracy. Data representing linear human arm motion along a single axis is generated to analyze and determine optimal parameters of the Kalman filter. Cross-axis sensitivity of the accelerometer limits the performance of the algorithm over longer durations. A method to identify the 1D components of 2D motion is developed, and cross-axis effects are removed to improve the performance of the motion tracking algorithm.
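The first algorithm's core idea, thresholding movement magnitude to measure active duration and count repetitions, can be sketched in a few lines. The sampling rate, threshold, and synthetic signal below are illustrative assumptions, not the thesis's parameters or data.

```python
import numpy as np

# Hedged sketch of threshold-based affected-arm activity detection:
# threshold the magnitude of (gravity-removed) accelerometer samples to find
# spans of movement and count distinct movement bouts as repetitions.

FS = 50          # sampling rate in Hz (assumed)
THRESHOLD = 0.3  # movement threshold in g (assumed)

def detect_activity(magnitude, fs=FS, threshold=THRESHOLD):
    """Return total active seconds and number of distinct movement bouts."""
    active = magnitude > threshold
    # rising edges (inactive -> active transitions) mark new repetitions
    rises = np.flatnonzero(~active[:-1] & active[1:])
    duration_s = active.sum() / fs
    return duration_s, len(rises) + int(active[0])

# synthetic signal: two movement bouts separated by rest
t = np.arange(0, 10, 1 / FS)
signal = 0.05 * np.random.randn(t.size)
signal[100:200] += 0.8  # bout 1
signal[350:420] += 0.8  # bout 2

seconds, reps = detect_activity(np.abs(signal))
print(f"active for {seconds:.1f} s across {reps} repetitions")
```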
Advisors/Committee Members: Jayasumana, Anura P. (advisor), Malcolm, Matthew P. (committee member), Pasricha, Sudeep (committee member), Malaiya, Yashwant K. (committee member).
Subjects/Keywords: stroke rehabilitation; accelerometers; constraint induced movement therapy; inertial measurement unit; Kalman filter; motion tracking
APA (6th Edition):
Doshi, S. S. (2011). Applications of inertial measurement units in monitoring rehabilitation progress of arm in stroke survivors. (Masters Thesis). Colorado State University. Retrieved from http://hdl.handle.net/10217/48196
Chicago Manual of Style (16th Edition):
Doshi, Saket Sham. “Applications of inertial measurement units in monitoring rehabilitation progress of arm in stroke survivors.” 2011. Masters Thesis, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/48196.
MLA Handbook (7th Edition):
Doshi, Saket Sham. “Applications of inertial measurement units in monitoring rehabilitation progress of arm in stroke survivors.” 2011. Web. 08 Mar 2021.
Vancouver:
Doshi SS. Applications of inertial measurement units in monitoring rehabilitation progress of arm in stroke survivors. [Internet] [Masters thesis]. Colorado State University; 2011. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/48196.
Council of Science Editors:
Doshi SS. Applications of inertial measurement units in monitoring rehabilitation progress of arm in stroke survivors. [Masters Thesis]. Colorado State University; 2011. Available from: http://hdl.handle.net/10217/48196
15.
Mosharraf Ghahfarokhi, Negar.
Cooperative defense mechanisms for detection, identification and filtering of DDoS attacks.
Degree: PhD, Electrical and Computer Engineering, 2016, Colorado State University
URL: http://hdl.handle.net/10217/176690
[Abstract not shown in this record; search-result snippets indicate the proposed DDoS defense mechanisms are evaluated using the DARPA 1998 dataset and extensive analysis of network traffic collected from Colorado State University and the University of Auckland, with attack traffic from CAIDA.]
APA (6th Edition):
Mosharraf Ghahfarokhi, N. (2016). Cooperative defense mechanisms for detection, identification and filtering of DDoS attacks. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/176690
Chicago Manual of Style (16th Edition):
Mosharraf Ghahfarokhi, Negar. “Cooperative defense mechanisms for detection, identification and filtering of DDoS attacks.” 2016. Doctoral Dissertation, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/176690.
MLA Handbook (7th Edition):
Mosharraf Ghahfarokhi, Negar. “Cooperative defense mechanisms for detection, identification and filtering of DDoS attacks.” 2016. Web. 08 Mar 2021.
Vancouver:
Mosharraf Ghahfarokhi N. Cooperative defense mechanisms for detection, identification and filtering of DDoS attacks. [Internet] [Doctoral dissertation]. Colorado State University; 2016. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/176690.
Council of Science Editors:
Mosharraf Ghahfarokhi N. Cooperative defense mechanisms for detection, identification and filtering of DDoS attacks. [Doctoral Dissertation]. Colorado State University; 2016. Available from: http://hdl.handle.net/10217/176690
16.
Desai, Srinivas.
Design and analysis of energy-efficient hierarchical electro-photonic network-on-chip architectures.
Degree: MS(M.S.), Electrical and Computer Engineering, 2015, Colorado State University
URL: http://hdl.handle.net/10217/166959
► Future applications running on chip multiprocessors (CMPs) with tens to hundreds of cores on a chip will require an efficient inter-core communication strategy to achieve…
(more)
▼ Future applications running on chip multiprocessors (CMPs) with tens to hundreds of cores on a chip will require an efficient inter-core communication strategy to achieve high performance. With recent demonstrations of feasibility in fabricating photonic components for on-chip communication, researchers are now focusing on photonic communication based on-chip networks for future CMPs. Photonic interconnects offer several benefits over conventional electrical on-chip interconnects, such as (1) high-bandwidth support by making use of dense wavelength division multiplexing, (2) distance independent power consumption, (3) significantly lower latency, and (4) improved performance-per-watt. Owing to these advantages, photonic interconnects are being considered as worthy alternatives for existing electrical networks. In this thesis, we design and explore a hierarchical electro-photonic network-on-chip (NoC) architecture called NOVA. NOVA aims to optimize several key design metrics such as throughput, latency, energy-delay-product, and power, which determine the overall system performance of a CMP. NOVA has three levels of communication hierarchy. The first level has a broadband-resonator based photonic switch. The second level consists of a low-loss, silicon-nitride arrayed waveguide grating based router. The last level of the hierarchy is made up of photonic ring waveguides. We have modeled and simulated multiple configurations of the proposed architecture with different designs of the photonic switch and several arbitration techniques on the photonic rings. This comprehensive analysis of NOVA allows us to arrive at an optimal configuration of the network for a given set of input applications and CMP platform. Finally, experimental results are strong indicators for considering the proposed architecture, as the improvements achieved were up to 6.1×, 55%, 5×, and 5.9× in terms of throughput, latency, energy-delay-product, and power compared to other
state-of-the-art photonic NoC architectures.
Advisors/Committee Members: Pasricha, Sudeep (advisor), Rajopadhye, Sanjay (committee member), Malaiya, Yashwant K. (committee member).
APA (6th Edition):
Desai, S. (2015). Design and analysis of energy-efficient hierarchical electro-photonic network-on-chip architectures. (Masters Thesis). Colorado State University. Retrieved from http://hdl.handle.net/10217/166959
Chicago Manual of Style (16th Edition):
Desai, Srinivas. “Design and analysis of energy-efficient hierarchical electro-photonic network-on-chip architectures.” 2015. Masters Thesis, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/166959.
MLA Handbook (7th Edition):
Desai, Srinivas. “Design and analysis of energy-efficient hierarchical electro-photonic network-on-chip architectures.” 2015. Web. 08 Mar 2021.
Vancouver:
Desai S. Design and analysis of energy-efficient hierarchical electro-photonic network-on-chip architectures. [Internet] [Masters thesis]. Colorado State University; 2015. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/166959.
Council of Science Editors:
Desai S. Design and analysis of energy-efficient hierarchical electro-photonic network-on-chip architectures. [Masters Thesis]. Colorado State University; 2015. Available from: http://hdl.handle.net/10217/166959
17.
Shirazi, Hossein.
Unbiased phishing detection using domain name based features.
Degree: MS(M.S.), Computer Science, 2018, Colorado State University
URL: http://hdl.handle.net/10217/191323
► Internet users are coming under a barrage of phishing attacks of increasing frequency and sophistication. While these attacks have been remarkably resilient against the vast…
(more)
▼ Internet users are coming under a barrage of phishing attacks of increasing frequency and sophistication. While these attacks have been remarkably resilient against the vast range of defenses proposed by academia, industry, and research organizations, machine learning approaches appear to be promising in distinguishing between phishing and legitimate websites. There are three main concerns with existing machine learning approaches for phishing detection. The first is that there is neither a framework, preferably open-source, for extracting features and keeping the dataset updated, nor an up-to-date dataset of phishing and legitimate websites. The second is the large number of features used and the lack of validating arguments for the choice of features selected to train the machine learning classifier. The last concern relates to the type of datasets used in the literature, which seem to be inadvertently biased with respect to features based on the URL or content. In this thesis, we describe the implementation of our open-source and extensible framework, named Fresh-Phish, for extracting features and creating an up-to-date phishing dataset. Using this framework, we implemented 29 different features to detect whether a given website is legitimate or phishing. We used 26 features reported in related work, added 3 new features, created a dataset of 6,000 websites (3,000 malicious and 3,000 genuine) with these features, and tested our approach. Using 6 different classifiers, we achieved an accuracy of 93%, which is reasonably high for this field. To address the second and third concerns, we put forward the intuition that the domain name of a phishing website is the tell-tale sign of phishing and holds the key to successful phishing detection. We focus on this aspect of phishing websites and design features that explore the relationship of the domain name to the key elements of the website. Our work differs from the existing state-of-the-art in that our feature set ensures minimal or no bias with respect to a dataset. Our learning model trains with only seven features and achieves a true positive rate of 98% and a classification accuracy of 97% on a sample dataset. Compared to the state-of-the-art work, our per-instance processing and classification is 4 times faster for legitimate websites and 10 times faster for phishing websites. Importantly, we demonstrate the shortcomings of using features based on URLs, as they are likely to be biased towards dataset collection and usage. We show the robustness of our learning algorithm by testing our classifiers on unknown live phishing URLs, achieving a detection accuracy of 99.7% compared to the earlier known best result of a 95% detection rate.
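The general shape of such a pipeline, extract domain-name-based features and train a classifier, can be sketched as follows. The four features and the toy labeled URLs below are illustrative assumptions; they are not the seven features or the dataset from the thesis.

```python
# Hedged sketch of domain-name-based feature extraction plus a classifier,
# in the spirit of the thesis's approach; feature set and data are invented.
from urllib.parse import urlparse
from sklearn.ensemble import RandomForestClassifier

def domain_features(url):
    host = urlparse(url).hostname or ""
    return [
        len(host),                             # overall domain length
        host.count("-"),                       # hyphens often pad deceptive names
        host.count("."),                       # subdomain depth
        int(any(c.isdigit() for c in host)),   # digits in the domain
    ]

urls = [
    ("http://example.com", 0),
    ("http://paypal-secure-login.example-verify.tk", 1),
    ("http://university.edu", 0),
    ("http://secure-bank1-update.example.ru", 1),
]
X = [domain_features(u) for u, _ in urls]
y = [label for _, label in urls]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([domain_features("http://login-verify-account.example.cn")]))
```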
Advisors/Committee Members: Ray, Indrakshi (advisor), Malaiya, Yashwant K. (committee member), Vijayasarathy, Leo R. (committee member).
Subjects/Keywords: domain name; phishing; biased datasets; phishing detection; machine learning
APA (6th Edition):
Shirazi, H. (2018). Unbiased phishing detection using domain name based features. (Masters Thesis). Colorado State University. Retrieved from http://hdl.handle.net/10217/191323
Chicago Manual of Style (16th Edition):
Shirazi, Hossein. “Unbiased phishing detection using domain name based features.” 2018. Masters Thesis, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/191323.
MLA Handbook (7th Edition):
Shirazi, Hossein. “Unbiased phishing detection using domain name based features.” 2018. Web. 08 Mar 2021.
Vancouver:
Shirazi H. Unbiased phishing detection using domain name based features. [Internet] [Masters thesis]. Colorado State University; 2018. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/191323.
Council of Science Editors:
Shirazi H. Unbiased phishing detection using domain name based features. [Masters Thesis]. Colorado State University; 2018. Available from: http://hdl.handle.net/10217/191323
18.
Belyaev, Kirill.
On component-oriented access control in lightweight virtualized server environments.
Degree: PhD, Computer Science, 2017, Colorado State University
URL: http://hdl.handle.net/10217/185748
► With the advancements in contemporary multi-core CPU architectures and increase in main memory capacity, it is now possible for a server operating system (OS), such…
(more)
▼ With the advancements in contemporary multi-core CPU architectures and increase in main memory capacity, it is now possible for a server operating system (OS), such as Linux, to handle a large number of concurrent services on a single server instance. Individual components of such services may run in different isolated runtime environments, such as chrooted jails or related forms of OS-level containers, and may need restricted access to system resources and the ability to share data and coordinate with each other in a regulated and secure manner. In this dissertation we describe our work on the access control framework for policy formulation, management, and enforcement that allows access to OS resources and also permits controlled data sharing and coordination for service components running in disjoint containerized environments within a single Linux OS server instance. The framework consists of two models and the policy formulation is based on the concept of policy classes for ease of administration and enforcement. The policy classes are managed and enforced through a Lightweight Policy Machine for Linux (LPM) that acts as the centralized reference monitor and provides a uniform interface for regulating access to system resources and requesting data and control objects. We present the details of our framework and also discuss the preliminary implementation and evaluation to demonstrate the feasibility of our approach.
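The policy-class idea, a centralized monitor deciding what each container's class may do with system resources, can be illustrated with a minimal sketch. The class names, membership table, and rules below are hypothetical, not the framework's actual policy language or the LPM interface.

```python
# Hedged sketch of the reference-monitor idea: service components in separate
# containers are grouped into policy classes, and a central monitor decides
# access to resources per class. All names and rules here are invented.

POLICY_CLASSES = {
    "web-tier": {"read": {"/var/cache/app"}, "write": set()},
    "db-tier":  {"read": {"/var/lib/db"}, "write": {"/var/lib/db"}},
}
MEMBERSHIP = {"container-17": "web-tier", "container-42": "db-tier"}

def check_access(container, action, resource):
    """Centralized decision: allow only what the container's class permits."""
    policy = POLICY_CLASSES.get(MEMBERSHIP.get(container, ""))
    return bool(policy) and resource in policy.get(action, set())

print(check_access("container-17", "read", "/var/cache/app"))  # True
print(check_access("container-17", "write", "/var/lib/db"))    # False
```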
Advisors/Committee Members: Ray, Indrakshi (advisor), Ray, Indrajit (committee member), Malaiya, Yashwant (committee member), Vijayasarathy, Leo (committee member).
Subjects/Keywords: data and application security; security architectures; tuple spaces; denial of service protection; access control; service and systems design
APA (6th Edition):
Belyaev, K. (2017). On component-oriented access control in lightweight virtualized server environments. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/185748
Chicago Manual of Style (16th Edition):
Belyaev, Kirill. “On component-oriented access control in lightweight virtualized server environments.” 2017. Doctoral Dissertation, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/185748.
MLA Handbook (7th Edition):
Belyaev, Kirill. “On component-oriented access control in lightweight virtualized server environments.” 2017. Web. 08 Mar 2021.
Vancouver:
Belyaev K. On component-oriented access control in lightweight virtualized server environments. [Internet] [Doctoral dissertation]. Colorado State University; 2017. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/185748.
Council of Science Editors:
Belyaev K. On component-oriented access control in lightweight virtualized server environments. [Doctoral Dissertation]. Colorado State University; 2017. Available from: http://hdl.handle.net/10217/185748
19.
Chittamuru, Sai Vineel Reddy.
Reliable, energy-efficient, and secure silicon photonic network-on-chip design for manycore architectures.
Degree: PhD, Electrical and Computer Engineering, 2018, Colorado State University
URL: http://hdl.handle.net/10217/189288
► Advances in technology scaling over the past several decades have enabled the integration of billions of transistors on a single die. Such a massive number…
(more)
▼ Advances in technology scaling over the past several decades have enabled the integration of billions of transistors on a single die. Such a massive number of transistors has allowed multiple processing cores and significant memory to be integrated on a chip, to meet the rapidly growing performance demands of modern applications. These on-chip processing and memory components require an efficient mechanism to communicate with each other. Thus emerging manycore architectures with high core counts have adopted scalable packet switched electrical network-on-chip (ENoC) fabrics to support on-chip transfers. But with several hundreds to thousands of on-chip cores expected to become a reality in the near future, ENoCs are projected to suffer from cripplingly high power dissipation and limited performance. Recent developments in the area of silicon photonics have enabled the integration of on-chip photonic interconnects with CMOS circuits, enabling photonic networks-on-chip (PNoCs) that can offer ultra-high bandwidth, reduced power dissipation, and lower latency than ENoCs. There are several challenges that hinder the commercial adoption of these PNoC architectures. In particular, the operation of silicon photonic components is very sensitive to thermal variations (TV) and process variations (PV) that frequently occur on a chip. These variations and their mitigation techniques create significant reliability issues and increase energy costs in PNoCs. Furthermore, photonic components are susceptible to intrinsic crosstalk noise and aging, which demands higher energy for reliable communication. Moreover, contention in photonic waveguides as well as laser power distribution overheads also reduce performance and energy-efficiency. In addition, hardware trojans (HTs) in the electrical circuitry of photonic components lead to covert data snooping from shared photonic waveguides and introduce serious hardware security threats. To address these challenges, in this dissertation we propose a cross-layer framework towards the design of reliable, secure, and energy-efficient PNoC architectures. We devise layer-specific solutions for PNoC design as part of our framework: (i) we propose device-level enhancements to adapt to TV, and to mitigate heterodyne crosstalk and intermodulation effect induced heterodyne crosstalk; we also analyze aging in photonic components and explore its impact on PNoCs; (ii) at the circuit-level we propose PV-aware homodyne and heterodyne crosstalk mitigation mechanisms, a PV-aware security enhancement mechanism, and TV- and PV-aware photonic component assignment mechanisms; (iii) at the architecture-level we propose new application specific and reconfigurable PNoC architectures to improve photonic channel utilization, a laser power management scheme across components of PNoC architectures, and a reservation-assisted security enhancement scheme to improve security in PNoC architectures; and (iv) at the system-level we propose TV and PV aware thread migration schemes and application scheduling schemes that…
Advisors/Committee Members: Pasricha, Sudeep (advisor), Jayasumana, Anura (committee member), Roy, Sourajeet (committee member), Malaiya, Yashwant K. (committee member).
Subjects/Keywords: hardware security; photonic network on chip; reliability; MR aging; crosstalk noise; process and thermal variations
APA (6th Edition):
Chittamuru, S. V. R. (2018). Reliable, energy-efficient, and secure silicon photonic network-on-chip design for manycore architectures. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/189288
Chicago Manual of Style (16th Edition):
Chittamuru, Sai Vineel Reddy. “Reliable, energy-efficient, and secure silicon photonic network-on-chip design for manycore architectures.” 2018. Doctoral Dissertation, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/189288.
MLA Handbook (7th Edition):
Chittamuru, Sai Vineel Reddy. “Reliable, energy-efficient, and secure silicon photonic network-on-chip design for manycore architectures.” 2018. Web. 08 Mar 2021.
Vancouver:
Chittamuru SVR. Reliable, energy-efficient, and secure silicon photonic network-on-chip design for manycore architectures. [Internet] [Doctoral dissertation]. Colorado State University; 2018. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/189288.
Council of Science Editors:
Chittamuru SVR. Reliable, energy-efficient, and secure silicon photonic network-on-chip design for manycore architectures. [Doctoral Dissertation]. Colorado State University; 2018. Available from: http://hdl.handle.net/10217/189288
20.
Sun, Wuliang.
Using slicing techniques to support scalable rigorous analysis of class models.
Degree: PhD, Computer Science, 2015, Colorado State University
URL: http://hdl.handle.net/10217/166933
► Slicing is a reduction technique that has been applied to class models to support model comprehension, analysis, and other modeling activities. In particular, slicing techniques…
(more)
▼ Slicing is a reduction technique that has been applied to class models to support model comprehension, analysis, and other modeling activities. In particular, slicing techniques can be used to produce class model fragments that include only those elements needed to analyze semantic properties of interest. However, many of the existing class model slicing techniques do not take constraints (invariants and operation contracts) expressed in auxiliary constraint languages into consideration when producing model slices. Their applicability is thus limited to situations in which the determination of slices does not require information found in constraints. In this dissertation we describe our work on class model slicing techniques that take into consideration constraints expressed in the Object Constraint Language (OCL). The slicing techniques described in the dissertation can be used to produce model fragments that each consists of only the model elements needed to analyze specified properties. The slicing techniques are intended to enhance the scalability of class model analysis that involves (1) checking conformance between an object configuration and a class model with specified invariants and (2) analyzing sequences of operation invocations to uncover invariant violations. The slicing techniques are used to produce model fragments that can be analyzed separately. An evaluation we performed provides evidence that the proposed slicing techniques can significantly reduce the time to perform the analysis.
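At its core, slicing of this kind computes the model elements reachable through dependency edges from the elements a property mentions. The toy dependency graph below is invented, and the dissertation's technique additionally analyzes the OCL expressions themselves; this is only a reachability sketch.

```python
# Hedged sketch: slicing as reachability over a dependency graph between
# model elements (classes, attributes, OCL invariants). Graph is invented.
from collections import deque

# element -> elements it depends on (edges derived from associations,
# attribute types, and names referenced in OCL invariants)
DEPENDS = {
    "inv:uniqueIds": {"Account", "Customer"},
    "Account": {"Customer"},
    "Customer": {"Address"},
    "Order": {"Customer", "Product"},
    "Product": set(),
    "Address": set(),
}

def slice_for(properties):
    """Return the model fragment needed to analyze the given properties."""
    frontier, seen = deque(properties), set(properties)
    while frontier:
        for dep in DEPENDS.get(frontier.popleft(), set()):
            if dep not in seen:
                seen.add(dep)
                frontier.append(dep)
    return seen

print(sorted(slice_for({"inv:uniqueIds"})))  # Order/Product are sliced away
```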
Advisors/Committee Members: Ray, Indrakshi (advisor), Bieman, James M. (committee member), Malaiya, Yashwant K. (committee member), Cooley, Daniel S. (committee member).
Subjects/Keywords: slicing; class model; UML
APA (6th Edition):
Sun, W. (2015). Using slicing techniques to support scalable rigorous analysis of class models. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/166933
Chicago Manual of Style (16th Edition):
Sun, Wuliang. “Using slicing techniques to support scalable rigorous analysis of class models.” 2015. Doctoral Dissertation, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/166933.
MLA Handbook (7th Edition):
Sun, Wuliang. “Using slicing techniques to support scalable rigorous analysis of class models.” 2015. Web. 08 Mar 2021.
Vancouver:
Sun W. Using slicing techniques to support scalable rigorous analysis of class models. [Internet] [Doctoral dissertation]. Colorado State University; 2015. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/166933.
Council of Science Editors:
Sun W. Using slicing techniques to support scalable rigorous analysis of class models. [Doctoral Dissertation]. Colorado State University; 2015. Available from: http://hdl.handle.net/10217/166933
21.
Lazrig, Ibrahim Meftah.
Privacy preserving linkage and sharing of sensitive data.
Degree: PhD, Computer Science, 2018, Colorado State University
URL: http://hdl.handle.net/10217/191341
► Sensitive data, such as personal and business information, is collected by many service providers nowadays. This data is considered as a rich source of information…
(more)
▼ Sensitive data, such as personal and business information, is collected by many service providers nowadays. This data is considered a rich source of information for research purposes that could benefit individuals, researchers, and service providers. However, because of the sensitivity of such data, privacy concerns, legislation, and conflicts of interest, data holders are reluctant to share their data with others. Data holders typically filter out or obliterate privacy related sensitive information from their data before sharing it, which limits the utility of this data and affects the accuracy of research. Such practice will protect individuals' privacy; however, it prevents researchers from linking records belonging to the same individual across different sources. This is commonly referred to as the record linkage problem by the healthcare industry. In this dissertation, our main focus is on designing and implementing efficient privacy preserving methods that will encourage sensitive information sources to share their data with researchers without compromising the privacy of the clients or affecting the quality of the research data. The proposed solution should be scalable and efficient for real-world deployments and provide good privacy assurance. While this problem has been investigated before, most of the proposed solutions were either considered partial solutions, not accurate, or impractical, and therefore subject to further improvements. We have identified several issues and limitations in the state of the art solutions and provided a number of contributions that improve upon existing solutions. Our first contribution is the design of a privacy preserving record linkage protocol using a semi-trusted third party. The protocol allows a set of data publishers (data holders) who compete with each other to share sensitive information with subscribers (researchers) while preserving the privacy of their clients and without sharing encryption keys. Our second contribution is the design and implementation of a probabilistic privacy preserving record linkage protocol that accommodates discrepancies and errors in the data, such as typos. This work builds upon the previous work by linking records that are similar, where the similarity range is formally defined. Our third contribution is a protocol that performs information integration and sharing without third party services. We use garbled circuits secure computation to design and build a system to perform the record linkages between two parties without sharing their data. Our design uses Bloom filters as inputs to the garbled circuits and performs a probabilistic record linkage using the Dice coefficient similarity measure. As garbled circuits are known for their expensive computations, we propose new approaches that reduce the computation overhead needed to achieve a given level of privacy. We built a scalable record linkage system using garbled circuits that could be deployed in a distributed computation environment like the cloud, and evaluated its security and performance. One of the…
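The Bloom-filter encoding and Dice-coefficient comparison at the heart of probabilistic record linkage can be sketched briefly: each name is split into bigrams, the bigrams are hashed into a bit set, and two encodings are compared by their Dice similarity. Filter size and hash count below are illustrative assumptions, not the dissertation's parameters, and the real system performs this comparison inside garbled circuits rather than in the clear.

```python
import hashlib

# Hedged sketch of Bloom-filter name encoding plus Dice-coefficient linkage.
# A set of set-bit indices stands in for the bit array; equivalent for Dice.

M, K = 256, 4  # bits in the filter, hash functions per bigram (assumed)

def bloom_encode(name, m=M, k=K):
    bits = set()
    grams = [name[i:i + 2] for i in range(len(name) - 1)]
    for g in grams:
        for seed in range(k):
            h = hashlib.sha256(f"{seed}:{g}".encode()).digest()
            bits.add(int.from_bytes(h[:4], "big") % m)
    return bits

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 1.0

alice, alyce = bloom_encode("alice smith"), bloom_encode("alyce smith")
bob = bloom_encode("bob jones")
print(f"alice~alyce: {dice(alice, alyce):.2f}, alice~bob: {dice(alice, bob):.2f}")
```

Near-duplicate names share most bigrams and thus most set bits, so typos still yield high Dice scores while unrelated names score low.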
Advisors/Committee Members: Ray, Indrakshi (advisor), Ray, Indrajit (advisor), Malaiya, Yashwant (committee member), Vijayasarathy, Leo (committee member), Ong, Toan (committee member).
Subjects/Keywords: record-linkage; privacy; security
APA (6th Edition):
Lazrig, I. M. (2018). Privacy preserving linkage and sharing of sensitive data. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/191341
Chicago Manual of Style (16th Edition):
Lazrig, Ibrahim Meftah. “Privacy preserving linkage and sharing of sensitive data.” 2018. Doctoral Dissertation, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/191341.
MLA Handbook (7th Edition):
Lazrig, Ibrahim Meftah. “Privacy preserving linkage and sharing of sensitive data.” 2018. Web. 08 Mar 2021.
Vancouver:
Lazrig IM. Privacy preserving linkage and sharing of sensitive data. [Internet] [Doctoral dissertation]. Colorado State University; 2018. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/191341.
Council of Science Editors:
Lazrig IM. Privacy preserving linkage and sharing of sensitive data. [Doctoral Dissertation]. Colorado State University; 2018. Available from: http://hdl.handle.net/10217/191341

Colorado State University
22.
Hossain, KM Mozammel.
Design methodology and productivity improvement in high speed VLSI circuits.
Degree: PhD, Electrical and Computer Engineering, 2017, Colorado State University
URL: http://hdl.handle.net/10217/181405
APA (6th Edition):
Hossain, K. M. (2017). Design methodology and productivity improvement in high speed VLSI circuits. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/181405
Chicago Manual of Style (16th Edition):
Hossain, KM Mozammel. “Design methodology and productivity improvement in high speed VLSI circuits.” 2017. Doctoral Dissertation, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/181405.
MLA Handbook (7th Edition):
Hossain, KM Mozammel. “Design methodology and productivity improvement in high speed VLSI circuits.” 2017. Web. 08 Mar 2021.
Vancouver:
Hossain KM. Design methodology and productivity improvement in high speed VLSI circuits. [Internet] [Doctoral dissertation]. Colorado State University; 2017. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/181405.
Council of Science Editors:
Hossain KM. Design methodology and productivity improvement in high speed VLSI circuits. [Doctoral Dissertation]. Colorado State University; 2017. Available from: http://hdl.handle.net/10217/181405

Colorado State University
23.
Kim, Jin Yoo.
Vulnerability discovery in multiple version software systems: open source and commercial software systems.
Degree: MS(M.S.), Computer Science, 2007, Colorado State University
URL: http://hdl.handle.net/10217/26808
► The vulnerability discovery process for a program describes the rate at which the vulnerabilities are discovered. A model of the discovery process can be used…
(more)
▼ The vulnerability discovery process for a program describes the rate at which vulnerabilities are discovered. A model of the discovery process can be used to estimate the number of vulnerabilities likely to be discovered in the near future. Past studies have considered vulnerability discovery only for individual software versions, without considering the impact of shared code among successive versions and the evolution of the source code. These factors need to be taken into account to estimate future vulnerability discovery trends more accurately. This thesis examines possible approaches for incorporating these factors into the vulnerability discovery process. We examine a new quantitative approach to the vulnerability discovery process based on measurements of source code shared among multiple versions of a software system. The applicability of the approach is examined using the Apache HTTP web server and the MySQL database management system (DBMS). This approach yields a better goodness of fit than the fits reported in previous research. Using the revised vulnerability discovery process, the superposition effect, which appeared as unexpected vulnerability discoveries in previous research, can be captured by the discovery model. From these results, we create and apply a new single-version vulnerability discovery model (SVDM) for open source and commercial software. The multiple-version vulnerability discovery model (MVDM) shows that the vulnerability discovery rate differs from the single-version model's rate because of the newly considered factors. The single-version process is examined, and model testing shows that the SVDM can serve as an alternative model. The modified vulnerability discovery model is presented to address weaknesses in previous research, and the theoretical modeling is discussed to provide a more accurate explanation.
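The thesis builds on quantitative vulnerability discovery models. As a hedged illustration of how such a model is fitted to cumulative vulnerability counts, the sketch below fits a logistic model of the Alhazmi-Malaiya form, Omega(t) = B / (B·C·e^(−A·B·t) + 1); the monthly data is synthetic and the parameter values are placeholders, not results from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit a logistic vulnerability discovery model to cumulative
# vulnerability counts. Data below is synthetic, for illustration only.

def aml(t, A, B, C):
    """Logistic VDM: B is the saturation count, A and C shape the curve."""
    return B / (B * C * np.exp(-A * B * t) + 1.0)

months = np.arange(1, 25)
observed = aml(months, A=0.004, B=60, C=0.5) + np.random.normal(0, 1, months.size)

params, _ = curve_fit(aml, months, observed, p0=(0.01, 50, 1.0), maxfev=10000)
A, B, C = params
print(f"A={A:.4f}  B={B:.1f} (total vulns at saturation)  C={C:.2f}")
```

A multiple-version variant would, roughly, fit shared-code and version-specific components separately and superpose them, which is the effect the thesis's MVDM captures.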
Advisors/Committee Members: Malaiya, Yashwant K. (advisor), Jayasumana, Anura P. (committee member), Ray, Indrakshi (committee member).
Subjects/Keywords: DBMS; VDM; Apache HTTP web server; Mysql; database management system; software vulnerability discovery model; Computer viruses; Spyware (Computer software); Computer security – Methodology; Data protection
APA (6th Edition):
Kim, J. Y. (2007). Vulnerability discovery in multiple version software systems: open source and commercial software systems. (Masters Thesis). Colorado State University. Retrieved from http://hdl.handle.net/10217/26808
Chicago Manual of Style (16th Edition):
Kim, Jin Yoo. “Vulnerability discovery in multiple version software systems: open source and commercial software systems.” 2007. Masters Thesis, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/26808.
MLA Handbook (7th Edition):
Kim, Jin Yoo. “Vulnerability discovery in multiple version software systems: open source and commercial software systems.” 2007. Web. 08 Mar 2021.
Vancouver:
Kim JY. Vulnerability discovery in multiple version software systems: open source and commercial software systems. [Internet] [Masters thesis]. Colorado State University; 2007. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/26808.
Council of Science Editors:
Kim JY. Vulnerability discovery in multiple version software systems: open source and commercial software systems. [Masters Thesis]. Colorado State University; 2007. Available from: http://hdl.handle.net/10217/26808

Colorado State University
24.
AlSagheer, Abdullah.
Kuwaiti engineers' perspectives of the engineering senior design (capstone) course as related to their professional experiences.
Degree: PhD, Education, 2010, Colorado State University
URL: http://hdl.handle.net/10217/44765
► This study looks into transfer of learning and its application in the actual employment of engineering students after graduation. At Kuwait University, a capstone course…
(more)
▼ This study looks into transfer of learning and its application in the actual employment of engineering students after graduation. At Kuwait University, a capstone course is being offered that aims to ensure that students amalgamate all kinds of engineering skills to apply to their work. Within a basic interpretive, qualitative study-design methodology, I interviewed 12 engineers who have recently experienced the senior design course at Kuwait University and are presently working in industry. From the analysis, four basic themes emerged that further delineate the focus of the entire study. The themes are 1) need for the capstone course, 2) applicability of and problems with the capstone course, 3) industry problems with training, and 4) students' attitudes toward the capstone course. The study concludes that participants are not transferring engineering skills; rather, they are transferring all types of instructions they have been given during their course of study at the university. A frequent statement is that the capstone course should be improved, specifically with respect to its timing, schedule, teachers' behavior, contents, and format. The study concludes that Kuwaiti engineers on the whole face problems with time management and management support. The study includes some implications for Kuwait University and recommendations that can provide significant support for the development of the Senior Design (Capstone) Course. For example, the project must be divided into phases to ensure timely completion of deliverables, and in order to motivate students to work hard and achieve true transfer of learning, Kuwait University should communicate with certain organizations to place its students at their research centers for capstone projects. All universities, including Kuwait University, should hire faculty specifically to run the capstone course. In conclusion, the study includes some suggestions for further research focused on issues related to the Senior Design (Capstone) Course. Future researchers should focus on developing the project-based course in earlier stages of students' educational programs by investigating the relationship between student achievement and market demand.
Advisors/Committee Members: Quick, Donald Gene (advisor), Anderson, Sharon K. (committee member), Banning, James H. (committee member), Malaiya, Yashwant K. (committee member).
Subjects/Keywords: Jāmiʻat al-Kuwayt; strategic management; senior design; Kuwait University; engineering education; capstone course; Engineering – Study and teaching (Higher) – Kuwait; Learning, Psychology of – Kuwait; Professional education – Curricula – Kuwait; Transfer of training – Kuwait
APA (6th Edition):
AlSagheer, A. (2010). Kuwaiti engineers' perspectives of the engineering senior design (capstone) course as related to their professional experiences. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/44765
Chicago Manual of Style (16th Edition):
AlSagheer, Abdullah. “Kuwaiti engineers' perspectives of the engineering senior design (capstone) course as related to their professional experiences.” 2010. Doctoral Dissertation, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/44765.
MLA Handbook (7th Edition):
AlSagheer, Abdullah. “Kuwaiti engineers' perspectives of the engineering senior design (capstone) course as related to their professional experiences.” 2010. Web. 08 Mar 2021.
Vancouver:
AlSagheer A. Kuwaiti engineers' perspectives of the engineering senior design (capstone) course as related to their professional experiences. [Internet] [Doctoral dissertation]. Colorado State University; 2010. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/44765.
Council of Science Editors:
AlSagheer A. Kuwaiti engineers' perspectives of the engineering senior design (capstone) course as related to their professional experiences. [Doctoral Dissertation]. Colorado State University; 2010. Available from: http://hdl.handle.net/10217/44765

Colorado State University
25.
Dinh-Trong, Trung T.
Systematic approach to testing UML designs, A.
Degree: PhD, Computer Science, 2006, Colorado State University
URL: http://hdl.handle.net/10217/26932
► In Model Driven Engineering (MDE) approaches, developers create and refine design models from which substantial portions of implementations are generated. During refinement, undetected faults in…
(more)
▼ In Model Driven Engineering (MDE) approaches, developers create and refine design models from which substantial portions of implementations are generated. During refinement, undetected faults in an abstract model can propagate into the refined model, and eventually into code. Hence, finding and removing faults in design models is essential for MDE approaches to succeed. This dissertation describes an approach to finding faults in design models created using the Unified Modeling Language (UML). Executable forms of UML design models are exercised using generated test inputs that provide coverage with respect to UML-based coverage criteria. The UML designs that are tested consist of class diagrams, sequence diagrams, and activity diagrams. The contributions of the dissertation include (1) a test input generation technique, (2) an approach to executing design models describing sequential behavior with test inputs in order to detect faults, and (3) a set of pilot studies carried out to explore the fault detection capability of our testing approach. The test input generation technique involves analyzing the design models under test to produce test inputs that satisfy UML sequence diagram coverage criteria. We defined a directed graph structure, named the Variable Assignment Graph (VAG), to generate test inputs. The VAG combines information from class and sequence diagrams. Paths are selected from the VAG, and constraints are identified to traverse the paths. The constraints are then solved with a constraint solver. The model execution technique involves transforming each design under test into an executable form, which is exercised with the generated inputs. Failures are reported if the observed behavior differs from the expected behavior. We proposed an action language, named the Java-like Action Language (JAL), that supports the UML action semantics. We developed a prototype tool, named UMLAnT, that performs test execution and animation of design models. We performed pilot studies to evaluate the fault detection effectiveness of our approach. Mutation faults and commonly occurring faults in UML models created by students in our software engineering courses were seeded in three design models. Ninety percent of the seeded faults were detected using our approach.
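The dissertation's VAG construction and constraint solving are not reproduced in this record; as a loose, hypothetical illustration of the underlying idea, enumerate paths through a directed graph and solve each path's accumulated guards to obtain one concrete test input per feasible path, the sketch below uses a toy graph and a brute-force finite-domain "solver" in place of a real constraint solver.

```python
# Hedged sketch of path-based test input generation. The graph stands in for
# a Variable Assignment Graph; nodes, guards, and the solver are toy stand-ins.

EDGES = {  # node -> [(next_node, guard)], guards over one variable x
    "start": [("a", lambda x: x > 0), ("b", lambda x: x <= 0)],
    "a": [("end", lambda x: x < 10)],
    "b": [("end", lambda x: True)],
    "end": [],
}

def satisfying_input(guards, candidates=range(-20, 21)):
    """Tiny 'solver': search a finite domain for a value meeting all guards."""
    return next((x for x in candidates if all(g(x) for g in guards)), None)

def test_inputs(node="start", guards=()):
    if node == "end":
        x = satisfying_input(list(guards))
        return [x] if x is not None else []
    inputs = []
    for nxt, g in EDGES[node]:
        inputs += test_inputs(nxt, guards + (g,))
    return inputs

print(test_inputs())  # one concrete input per feasible path, e.g. [1, -20]
```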
Advisors/Committee Members: France, Robert B. (advisor), Ghosh, Sudipto (advisor), Bieman, James M. (committee member), Malaiya, Yashwant K. (committee member), Fan, Chuen-mei (committee member).
Subjects/Keywords: VAG; model driven engineering; JAL; Java-like action language; fault detection; unified modeling language; UMLAnT; UML; variable assignment graph; MDE; UML (Computer science); Computer software – Development
APA (6th Edition):
Dinh-Trong, T. T. (2006). Systematic approach to testing UML designs, A. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/26932
Chicago Manual of Style (16th Edition):
Dinh-Trong, Trung T. “Systematic approach to testing UML designs, A.” 2006. Doctoral Dissertation, Colorado State University. Accessed March 08, 2021.
http://hdl.handle.net/10217/26932.
MLA Handbook (7th Edition):
Dinh-Trong, Trung T. “Systematic approach to testing UML designs, A.” 2006. Web. 08 Mar 2021.
Vancouver:
Dinh-Trong TT. Systematic approach to testing UML designs, A. [Internet] [Doctoral dissertation]. Colorado State University; 2006. [cited 2021 Mar 08].
Available from: http://hdl.handle.net/10217/26932.
Council of Science Editors:
Dinh-Trong TT. Systematic approach to testing UML designs, A. [Doctoral Dissertation]. Colorado State University; 2006. Available from: http://hdl.handle.net/10217/26932