You searched for +publisher:"Virginia Tech" +contributor:("Vullikanti, Anil Kumar S.").
Showing records 1 – 30 of 67 total matches.
◁ [1] [2] [3] ▶
1.
Gupta, Aparna.
Finding Succinct Representations For Clusters.
Degree: MS, Computer Science and Applications, 2019, Virginia Tech
URL: http://hdl.handle.net/10919/91388
▼ Improving the explainability of results from machine learning methods has become an important research goal. Clustering is a commonly used machine learning technique that is applied to a variety of datasets. In this thesis, we study the problem of making clusters more interpretable, and we ask whether it is possible to explain clusters using a set of attributes that were not used while generating those clusters.
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Vullikanti, Anil Kumar S. (committee member), Swarup, Samarth (committee member).
Subjects/Keywords: clustering; integer programming
2.
Rangudu, Venkata Pavan Kumar.
Inferring Network Status from Partial Observations.
Degree: MS, Computer Science and Applications, 2017, Virginia Tech
URL: http://hdl.handle.net/10919/74983
▼ In many network applications, such as the Internet and infrastructure networks, nodes fail or get congested dynamically, and tracking this information about all the nodes in a network where some dynamical process is taking place is a fundamental problem. In this work, we study the problem of inferring the complete set of failed nodes when only a sample of the node failures is known – we refer to this problem as prob{}. We consider the setting in which there exist correlations between node failures, which has been observed in many infrastructure networks. We formalize the prob{} problem using the Minimum Description Length (MDL) principle, show that, in general, finding solutions that minimize the MDL cost is hard, and develop efficient algorithms with rigorous performance guarantees for finding near-optimal MDL-cost solutions. We evaluate our methods on both synthetic and real-world datasets, including one from WAZE, a crowd-sourced road navigation tool that collects and presents traffic incident reports. We find that the proposed greedy algorithm recovers, on average, 80% of the failed nodes in a network given a partial sample of the failures, sampled from the true set at some predefined rate. Furthermore, we prove that this algorithm finds a solution whose MDL cost is within an additive log(n) of the optimal.
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Vullikanti, Anil Kumar S. (committee member), Bisset, Keith R. (committee member).
Subjects/Keywords: Network Topology Inference; Network Tomography; Minimum Description Length
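The abstract above hinges on an MDL-style trade-off: node failures are correlated, so a hypothesis that forms a few connected failure clusters is "cheaper to describe" than many scattered nodes. The sketch below is only a toy illustration of that idea, not the thesis's actual cost function or greedy algorithm; the graph, the cost constants, and the example are all invented.

```python
import math
from collections import deque

def components(graph, nodes):
    """Number of connected components of the subgraph induced by `nodes`."""
    nodes, seen, comps = set(nodes), set(), 0
    for s in nodes:
        if s in seen:
            continue
        comps += 1
        queue = deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    queue.append(v)
    return comps

def toy_mdl_cost(graph, hypothesis):
    """Toy two-part description length: each failure cluster costs log2(n) bits to
    locate, and each hypothesized node adds one extra bit.  Illustrative only;
    this is not the cost function used in the thesis."""
    n = max(len(graph), 2)
    return components(graph, hypothesis) * math.log2(n) + len(hypothesis)

def greedy_infer_failures(graph, observed):
    """Greedily add neighboring nodes to the observed failures while the toy cost drops."""
    hypothesis = set(observed)
    while True:
        frontier = {v for u in hypothesis for v in graph[u]} - hypothesis
        current = toy_mdl_cost(graph, hypothesis)
        best = min(frontier, key=lambda v: toy_mdl_cost(graph, hypothesis | {v}), default=None)
        if best is None or toy_mdl_cost(graph, hypothesis | {best}) >= current:
            return hypothesis
        hypothesis.add(best)

# Path 0-1-2-3-4; nodes 1, 2, 3 failed but only 1 and 3 were reported.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(greedy_infer_failures(graph, {1, 3}))  # {1, 2, 3}: node 2 is inferred as a likely failure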
3.
Asathulla, Mudabir Kabir.
A Sparsification Based Algorithm for Maximum-Cardinality Bipartite Matching in Planar Graphs.
Degree: MS, Computer Engineering, 2017, Virginia Tech
URL: http://hdl.handle.net/10919/88080
▼ Matching is one of the most fundamental algorithmic graph problems. Many variants of matching problems have been studied on different classes of graphs; the one of special interest to us is Maximum Cardinality Bipartite Matching in Planar Graphs. In this work, we present a novel sparsification-based approach for computing a maximum/perfect bipartite matching in planar graphs. The overall complexity of our algorithm is O(n^{6/5} log^2 n), where n is the number of vertices in the graph, improving on the O(n^{3/2}) time achieved independently by the Hopcroft-Karp algorithm and by Lipton and Tarjan's divide-and-conquer approach using planar separators. Our algorithm combines the best of both these standard algorithms with our sparsification technique and rich planar graph properties to achieve the speedup. Our algorithm is not the fastest known, given the existence of an O(n log^3 n) algorithm based on a max-flow reduction.
Advisors/Committee Members: Vullikanti, Anil Kumar S. (committeechair), Raghvendra, Sharath (committeechair), Zeng, Haibo (committee member).
Subjects/Keywords: matching; maximum cardinality; bipartite; planar graph; planar separators
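For a concrete point of reference, the sketch below is the textbook augmenting-path algorithm (Kuhn's algorithm, O(V·E)) for maximum-cardinality bipartite matching. Hopcroft-Karp improves this to O(E√V), and the thesis's sparsification-based planar-graph algorithm is faster still and not reproduced here; the tiny test graph is made up.

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Maximum-cardinality bipartite matching via repeated augmenting paths
    (Kuhn's algorithm).  adj[u] lists right-side neighbors of left vertex u."""
    match_right = [-1] * n_right  # right vertex -> matched left vertex, or -1

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    matching_size = 0
    for u in range(n_left):
        if try_augment(u, set()):
            matching_size += 1
    return matching_size, match_right

# Tiny example: left vertices {0,1,2}, right vertices {0,1,2}.
adj = {0: [0, 1], 1: [0], 2: [1, 2]}
print(max_bipartite_matching(adj, 3, 3))  # (3, [1, 0, 2]) -- a perfect matching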
4.
Ramesh, Shreyas.
Deep Learning for Taxonomy Prediction.
Degree: MS, Computer Science and Applications, 2019, Virginia Tech
URL: http://hdl.handle.net/10919/89752
▼ Taxonomy prediction is a science involving the hierarchical classification of DNA fragments up to the rank species. Given species diversity on Earth, taxonomy prediction gets challenging with (i) increasing number of species (labels) to classify and (ii) decreasing input (DNA) size. In this research, we introduce Predicting Linked Organisms, Plinko, for short. Plinko is a fully-functioning, state-of-the-art predictive system that accurately captures DNA - Taxonomy relationships where other state-of-the-art algorithms falter. Three major challenges in taxonomy prediction are (i) large dataset sizes (order of 10^9 sequences), (ii) large label spaces (order of 10^3 labels), and (iii) low resolution inputs (100 base pairs or less). Plinko leverages multi-view convolutional neural networks and the pre-defined taxonomy tree structure to improve multi-level taxonomy prediction for hard to classify sequences under the three conditions stated above. Plinko has the advantage of relatively low storage footprint, making the solution portable, and scalable with anticipated genome database growth. To the best of our knowledge, Plinko is the first to use multi-view convolutional neural networks as the core algorithm in a compositional, alignment-free approach to taxonomy prediction.
Advisors/Committee Members: Marathe, Madhav V. (committeechair), Warren, Andrew S. (committeechair), Vullikanti, Anil Kumar S. (committee member).
Subjects/Keywords: taxonomy prediction; convolutional neural networks; hierarchical prediction; cnn; taxonomic binning
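As a small illustration of the kind of low-resolution input the abstract mentions (fragments of 100 base pairs or less fed to convolutional networks), here is a generic one-hot encoding of a DNA read. This is a common CNN input representation offered only as an example; it is not necessarily Plinko's actual encoding or its multi-view construction.

```python
import numpy as np

def one_hot_encode(read, length=100):
    """One-hot encode a DNA fragment into a (length, 4) array with A, C, G, T channels;
    reads shorter than `length` are zero-padded, and unknown bases stay all-zero.
    Generic sketch of a typical CNN input, not Plinko's actual pipeline."""
    channels = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = np.zeros((length, 4), dtype=np.float32)
    for i, base in enumerate(read[:length].upper()):
        if base in channels:
            x[i, channels[base]] = 1.0
    return x

print(one_hot_encode("ACGTTGCA").shape)  # (100, 4), ready to feed a 1-D convolutional model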
5.
Lindsay, Aaron Charles.
LWFG: A Cache-Aware Multi-core Real-Time Scheduling Algorithm.
Degree: MS, Computer Science, 2012, Virginia Tech
URL: http://hdl.handle.net/10919/33541
▼ As the number of processing cores contained in modern processors continues to increase, cache hierarchies are becoming more complex. This added complexity has the effect of increasing the potential cost of any cache misses on such architectures. When cache misses become more costly, minimizing them becomes even more important, particularly in terms of scalability concerns.
In this thesis, we consider the problem of cache-aware real-time scheduling on multiprocessor systems. One avenue for improving real-time performance on multi-core platforms is task partitioning. Partitioning schemes statically assign tasks to cores, eliminating task migrations and reducing system overheads. Unfortunately, no current partitioning schemes explicitly consider cache effects when partitioning tasks.
We develop the LWFG (Largest Working set size First, Grouping) cache-aware partitioning algorithm, which seeks to schedule tasks which share memory with one another in such a way as to minimize the total number of cache misses. LWFG minimizes cache misses by partitioning tasks that share memory onto the same core and by distributing the system's sum working set size as evenly as possible across the available cores.
We evaluate the LWFG partitioning algorithm against several other commonly-used partitioning heuristics on a modern 48-core platform running ChronOS Linux. Our evaluation shows that in some cases, the LWFG partitioning algorithm increases execution efficiency by as much as 15% (measured by instructions per cycle) and decreases mean maximum tardiness by up to 60%.
Advisors/Committee Members: Vullikanti, Anil Kumar S. (committee member), Kafura, Dennis G. (committeecochair), Ravindran, Binoy (committeecochair).
Subjects/Keywords: Linux; Real-Time; Scheduling; Multiprocessors; Cache-aware; Partitioning
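The two ideas described above (co-locating tasks that share memory and balancing total working-set size across cores) can be sketched with a toy partitioner that groups tasks by the memory region they touch and assigns whole groups to the least-loaded core. This is a simplified illustration in the spirit of the abstract, not the actual LWFG algorithm; the task records and WSS values are invented.

```python
from collections import defaultdict

def partition_tasks(tasks, num_cores):
    """Toy cache-aware partitioner: tasks sharing a memory region go to the same core,
    and each group is placed on the core with the smallest total working-set size (WSS)."""
    groups = defaultdict(list)               # memory region -> tasks that touch it
    for task in tasks:
        groups[task["region"]].append(task)

    core_load = [0.0] * num_cores             # summed WSS per core
    assignment = {}
    # Place the largest groups (by WSS) first so balancing has room to even things out.
    for region, group in sorted(groups.items(), key=lambda kv: -sum(t["wss"] for t in kv[1])):
        core = min(range(num_cores), key=lambda c: core_load[c])
        for task in group:
            assignment[task["name"]] = core
            core_load[core] += task["wss"]
    return assignment, core_load

tasks = [
    {"name": "t1", "region": "A", "wss": 4.0},
    {"name": "t2", "region": "A", "wss": 2.0},
    {"name": "t3", "region": "B", "wss": 3.0},
    {"name": "t4", "region": "C", "wss": 1.0},
]
print(partition_tasks(tasks, 2))  # t1 and t2 share region A and land on the same core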
6.
Chandan, Shridhar.
Discrete Event Simulation of Mobility and Spatio-Temporal Spectrum Demand.
Degree: MS, Computer Science and Applications, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/25331
▼ Realistic mobility and cellular traffic modeling is key to various wireless networking applications and has a significant impact on network performance. Planning and design, network resource allocation, and performance evaluation in cellular networks require realistic traffic modeling. We propose a Discrete Event Simulation framework, Diamond (Discrete Event Simulation of Mobility and Spatio-Temporal Spectrum Demand), to model and analyze realistic activity-based mobility and spectrum demand patterns. The framework can be used for spatio-temporal estimation of load, in deciding the location of a new base station, in contingency planning, and in estimating the resilience of the existing infrastructure. The novelty of this framework lies in its ability to capture a variety of complex, realistic, and dynamically changing events effectively. Our initial results show that the framework can be instrumental in contingency planning and dynamic spectrum allocation.
Advisors/Committee Members: Vullikanti, Anil Kumar S. (committeechair), Marathe, Madhav Vishnu (committeechair), Marathe, Achla (committee member).
Subjects/Keywords: Wireless Communication Networks; Spectrum Demand; Discrete Event Simulation
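Diamond itself is not shown in this listing, but the core of any discrete event simulation is a time-ordered event queue. A minimal, generic event loop (with made-up event types, unrelated to Diamond's actual implementation) looks roughly like this:

```python
import heapq

class DiscreteEventSimulator:
    """Minimal discrete event simulation core: a time-ordered priority queue of events.
    Generic sketch only; Diamond's actual event types and handlers are not shown here."""
    def __init__(self):
        self._queue = []
        self._counter = 0        # tie-breaker so events with equal times stay ordered
        self.now = 0.0

    def schedule(self, time, handler, *args):
        heapq.heappush(self._queue, (time, self._counter, handler, args))
        self._counter += 1

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler, args = heapq.heappop(self._queue)
            handler(self, *args)

# Toy usage: a user requests spectrum at t=1.0 and releases it 2.5 time units later.
def request_spectrum(sim, user):
    print(f"t={sim.now:.1f}: {user} requests a channel")
    sim.schedule(sim.now + 2.5, release_spectrum, user)

def release_spectrum(sim, user):
    print(f"t={sim.now:.1f}: {user} releases the channel")

sim = DiscreteEventSimulator()
sim.schedule(1.0, request_spectrum, "user-42")
sim.run()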
7.
Mishra, Gaurav.
Development of Person-Person Network and Interacting PTTS in EpiSimdemics.
Degree: MS, Computer Science and Applications, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/64160
▼ Communication over social media, telephone, email, text messages, etc. has emerged as an integral part of modern society, and people commonly use these channels to express anger, anxiety, fear, agitation, and opinions. People's social interactions tend to increase dramatically during periods of epidemics, protests, and calamities. Therefore, the above-mentioned communication channels play an important role in the spread of infectious phenomena, like rumors, fads, and effects. These infectious phenomena alter people's behavior during a disease epidemic [1][2].
Social contact networks and epidemics co-evolve [1][2]. The spread of a disease influences people's behavior, which in turn changes their social contact network, thereby altering the disease spread itself. As a result, there is a need to model the spread of these infectious phenomena that lead to changes in behavior. Their propagation among the population primarily depends on the social contact network. The spread of social contagions is very similar to the spread of any infectious disease, as both are contagious in nature. Spreading a contagious disease requires direct exposure to an infectious agent, whereas social contagions can spread through various communication media like social networking forums, phones, emails, and tweets.
EpiSimdemics is an individual-based modeling environment. It uses a people-location bipartite graph as the underlying network [3]. In its current form, EpiSimdemics requires two people to interact at a location, so it cannot simulate the spread of social contagions that do not require two agents to meet at a location.
We enhance EpiSimdemics by incorporating a Person-Person network, which can model communications between people that are not contact based, such as communications over email, phone, text, and tweet. This Person-Person network is used to model effects (social contagion) that induce behavioral changes in the population and thus impact the disease spread. The disease spread itself is modeled on the Person-Location network. This leads to a scenario of two interacting networks: a Person-Person network modeling social contagion and a Person-Location network modeling disease. Theoretically, there can be multiple such networks modeling various interacting phenomena.
We demonstrate the usefulness of this network by modeling and simulating two interacting PTTSs (probabilistic timed transition systems). To model disease epidemics, we define a Disease Model, and to model effects (social contagion), we define a Fear Model. We show how these models influence each other by performing simulations in EpiSimdemics with the interacting Disease and Fear Models. A model that does not include the effects of affect adaptations on disease epidemics, and vice versa, fails to reflect the actual behavior of a society during an epidemic. The addition of the Person-Person network to EpiSimdemics will allow for a better understanding of affect adaptations, which can include behavior changes in society during an epidemic…
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Vullikanti, Anil Kumar S. (committee member), Bisset, Keith R. (committee member).
Subjects/Keywords: EpiSimdemics; PTTS; Person-Person Network
8.
Mahajan, Rutvij Sanjay.
Empirical Analysis of Algorithms for the k-Server and Online Bipartite Matching Problems.
Degree: MS, Computer Engineering, 2018, Virginia Tech
URL: http://hdl.handle.net/10919/96725
▼ The k-server problem is of significant importance to the theoretical computer science and operations research communities. In this problem, we are given k servers, their initial locations, and a sequence of n requests that arrive one at a time. All these locations are points from some metric space, and the cost of serving a request is given by the distance between the location of the request and the current location of the server selected to process it. We must immediately process the request by moving a server to the request location. The objective is to minimize the total distance traveled by the servers to process all the requests.
In this thesis, we present an empirical analysis of a new online algorithm for the k-server problem. This algorithm maintains two solutions: an online solution and an approximately optimal offline solution. When a request arrives, we update the offline solution and use this update to inform the online assignment. The algorithm is motivated by the Robust-Matching Algorithm [RMAlgorithm, Raghvendra, APPROX 2016] for the closely related online bipartite matching problem. We then give a comprehensive experimental analysis of this algorithm and also provide a graphical user interface that can be used to visualize execution instances of the algorithm. We also consider these problems in a stochastic setting and implement a lookahead strategy on top of the new online algorithm.
Advisors/Committee Members: Vullikanti, Anil Kumar S. (committeechair), Raghvendra, Sharath (committee member), Tokekar, Pratap (committee member).
Subjects/Keywords: k-Server Problem; Work Function Algorithm; Bipartite Matching; Assignment Problem
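To make the problem statement concrete, here is the naive greedy baseline for the k-server problem (always move the nearest server), which online algorithms such as the one studied in the thesis are designed to outperform. The metric, server positions, and request sequence below are invented for illustration; this is not the robust-matching-based algorithm analyzed in the thesis.

```python
def greedy_k_server(servers, requests, dist):
    """Naive greedy k-server baseline: serve each request with the closest server
    and move that server to the request location."""
    servers = list(servers)
    total_cost = 0.0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: dist(servers[j], r))
        total_cost += dist(servers[i], r)
        servers[i] = r            # the chosen server now sits at the request location
    return total_cost, servers

# 1-D metric example with two servers starting at 0 and 10.
dist = lambda a, b: abs(a - b)
print(greedy_k_server([0, 10], [4, 6, 5, 7], dist))  # (9.0, [7, 10])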
9.
Aji, Sudarshan Mandayam.
Estimating Reachability Set Sizes in Dynamic Graphs.
Degree: MS, Computer Science and Applications, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/49262
▼ Graphs are a commonly used abstraction for diverse kinds of interactions, e.g., on Twitter and Facebook. Different kinds of topological properties of such graphs are computed to gain insights into their structure. Computing properties of large real-world networks is computationally very challenging. Further, most real-world networks are dynamic, i.e., they change over time. Therefore, there is a need for efficient dynamic algorithms that offer good space-time trade-offs. In this thesis we study the problem of computing the reachability set size of a vertex, which is a fundamental problem with applications in databases and social networks. We develop the first Giraph-based algorithms for different dynamic versions of these problems, which scale to graphs with millions of edges.
Advisors/Committee Members: Vullikanti, Anil Kumar S. (committeechair), Marathe, Madhav Vishnu (committee member), Bisset, Keith R. (committee member).
Subjects/Keywords: Algorithm; Dynamic graphs; Giraph; Graph framework
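For context, the static version of the quantity studied here is straightforward: the reachability set size of a vertex is the number of vertices reachable from it, computable with one BFS. The sketch below shows that static computation and the naive recompute-after-every-update handling of a dynamic edge insertion; the thesis's Giraph-based algorithms are designed precisely to avoid such full recomputation, and the example graph is made up.

```python
from collections import deque

def reachability_set_size(adj, source):
    """Number of vertices reachable from `source` in a directed graph (single BFS)."""
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen)

# Naive dynamic handling: recompute from scratch after each edge insertion.
adj = {0: [1], 1: [2], 2: [], 3: [0]}
print(reachability_set_size(adj, 3))   # 4: {3, 0, 1, 2}
adj[2] = [3]                            # insert edge 2 -> 3, creating a cycle
print(reachability_set_size(adj, 2))   # 4 after the update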
10.
Parikh, Nidhi Kiranbhai.
Generating Random Graphs with Tunable Clustering Coefficient.
Degree: MS, Computer Science, 2011, Virginia Tech
URL: http://hdl.handle.net/10919/31591
▼ Most real-world networks exhibit a high clustering coefficient – the probability that two neighbors of a node are also neighbors of each other. We propose four algorithms, CONF-1, CONF-2, THROW-1, and THROW-2, which are based on the configuration model and take a triangle degree sequence (representing the number of triangles/corners at a node) and a single-edge degree sequence (representing the number of single-edges/stubs at a node) as input and generate a random graph with a tunable clustering coefficient. We analyze them theoretically and empirically for the case of a regular graph. CONF-1 and CONF-2 generate a random graph with the degree sequence and the clustering coefficient anticipated from the input triangle and single-edge degree sequences. At each time step, CONF-1 chooses each node for creating triangles or single edges with the same probability, while CONF-2 chooses a node for creating triangles or single edges with probability proportional to its number of unconnected corners or unconnected stubs, respectively. Experimental results match the anticipated clustering coefficient quite well, except for highly dense graphs, in which case the experimental clustering coefficient is higher than the anticipated value. THROW-2 chooses three distinct nodes for creating triangles and two distinct nodes for creating single edges, while the nodes need not be distinct for THROW-1. For THROW-1 and THROW-2, the degree sequence and the clustering coefficient of the generated graph vary from the input. However, the expected degree distribution and the clustering coefficient of the generated graph can also be predicted using analytical results. Experiments show that, for THROW-1 and THROW-2, the results match the analytical predictions quite well. Typically, only information about the degree sequence or degree distribution is available. We also propose an algorithm, DEG, that takes a degree sequence and clustering coefficient as input and generates a graph with those properties. Experiments show results for DEG that are quite similar to those for CONF-1 and CONF-2.
Advisors/Committee Members: Heath, Lenwood S. (committeechair), Vullikanti, Anil Kumar S. (committee member), Marathe, Madhav V. (committee member).
Subjects/Keywords: Clustering coefficient; complex networks; random graphs; algorithms
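The basic lever described in the abstract is that planting triangles, rather than only single edges, raises the clustering coefficient. The sketch below throws random triangles and random single edges into an empty graph and then measures the average local clustering coefficient; it is only similar in spirit to the THROW algorithms and is not the CONF/THROW/DEG procedures themselves, and the parameters are made up.

```python
import random
from itertools import combinations

def add_edge(adj, u, v):
    if u != v:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

def random_graph_with_triangles(n, num_triangles, num_single_edges, seed=0):
    """Throw random triangles and random single edges into an empty graph.
    More triangles relative to single edges pushes the clustering coefficient up."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for _ in range(num_triangles):
        a, b, c = rng.sample(range(n), 3)
        add_edge(adj, a, b); add_edge(adj, b, c); add_edge(adj, a, c)
    for _ in range(num_single_edges):
        a, b = rng.sample(range(n), 2)
        add_edge(adj, a, b)
    return adj

def average_clustering(adj):
    """Average local clustering coefficient (nodes of degree < 2 count as 0)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

g = random_graph_with_triangles(200, num_triangles=150, num_single_edges=50)
print(round(average_clustering(g), 3))  # well above a comparable triangle-free random graph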
11.
Cho, Yong Ju.
Algorithms for Reconstructing and Reasoning about Chemical Reaction Networks.
Degree: PhD, Computer Science and Applications, 2013, Virginia Tech
URL: http://hdl.handle.net/10919/19244
▼ Recent advances in systems biology have uncovered detailed mechanisms of biological processes such as the cell cycle, circadian rhythms, and signaling pathways. These mechanisms are modeled by chemical reaction networks (CRNs) which are typically simulated by converting to ordinary differential equations (ODEs), so that the goal is to closely reproduce the observed quantitative and qualitative behaviors of the modeled process. This thesis proposes two algorithmic problems related to the construction and comprehension of CRN models. The first problem focuses on reconstructing CRNs from given time series. Given multivariate time course data obtained by perturbing a given CRN, how can we systematically deduce the interconnections between the species of the network? We demonstrate how this problem can be modeled as, first, one of uncovering conditional independence relationships using buffering experiments and, second, of determining the properties of the individual chemical reactions. Experimental results demonstrate the effectiveness of our approach on both synthetic and real CRNs. The second problem this work focuses on is to aid in network comprehension, i.e., to understand the motifs underlying complex dynamical behaviors of CRNs. Specifically, we focus on bistability – an important dynamical property of a CRN – and propose algorithms to identify the core structures responsible for conferring bistability. The approach we take is to systematically infer the instability causing structures (ICSs) of a CRN and use machine learning techniques to relate properties of the CRN to the presence of such ICSs. This work has the potential to aid in not just network comprehension but also model simplification, by helping reduce the complexity of known bistable systems.
Advisors/Committee Members: Ramakrishnan, Naren (committeechair), Vullikanti, Anil Kumar S. (committee member), Murali, T. M. (committee member), Bevan, David R. (committee member), Cao, Yang (committee member).
Subjects/Keywords: Chemical reaction networks; bistability; data mining; time series modeling
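As background for how CRN models are typically simulated (the conversion to ODEs mentioned in the abstract), here is a tiny mass-action example, a reversible dimerization 2A <-> B, integrated with SciPy. The species, rate constants, and reactions are invented for illustration and have nothing to do with the thesis's specific networks or its reconstruction algorithms.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy mass-action CRN: 2A -> B with rate constant k1, B -> 2A with rate constant k2.
k1, k2 = 0.5, 0.2

def crn_odes(t, y):
    """Convert the two reactions into ODEs for the concentrations y = [A, B]."""
    A, B = y
    forward = k1 * A * A      # rate of 2A -> B
    backward = k2 * B         # rate of B -> 2A
    dA = -2 * forward + 2 * backward
    dB = forward - backward
    return [dA, dB]

sol = solve_ivp(crn_odes, t_span=(0.0, 20.0), y0=[1.0, 0.0], dense_output=True)
print(sol.y[:, -1])   # concentrations of A and B near equilibrium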
12.
Cadena, Jose Eduardo.
Finding Interesting Subgraphs with Guarantees.
Degree: PhD, Computer Science and Applications, 2018, Virginia Tech
URL: http://hdl.handle.net/10919/81960
▼ Networks are a mathematical abstraction of the interactions between a set of entities, with extensive applications in social science, epidemiology, bioinformatics, and cybersecurity, among others. There are many fundamental problems when analyzing network data, such as anomaly detection, dense subgraph mining, motif finding, information diffusion, and epidemic spread. A common underlying task in all these problems is finding an "interesting subgraph"; that is, finding a part of the graph – usually small relative to the whole – that optimizes a score function and has some property of interest, such as connectivity or a minimum density.
Finding subgraphs that satisfy common constraints of interest, such as the ones above, is computationally hard in general, and state-of-the-art algorithms for many problems in network analysis are heuristic in nature. These methods are fast and usually easy to implement. However, they come with no theoretical guarantees on the quality of the solution, which makes it difficult to assess how the discovered subgraphs compare to an optimal solution, which in turn affects the data mining task at hand. For instance, in anomaly detection, solutions with low anomaly score lead to sub-optimal detection power. On the other end of the spectrum, there have been significant advances on approximation algorithms for these challenging graph problems in the theoretical computer science community. However, these algorithms tend to be slow, difficult to implement, and they do not scale to the large datasets that are common nowadays.
The goal of this dissertation is developing scalable algorithms with theoretical guarantees for various network analysis problems, where the underlying task is to find subgraphs with constraints. We find interesting subgraphs with guarantees by adapting techniques from parameterized complexity, convex optimization, and submodularity optimization. These techniques are well-known in the algorithm design literature, but they lead to slow and impractical algorithms. One unifying theme in the problems that we study is that our methods are scalable without sacrificing the theoretical guarantees of these algorithm design techniques. We accomplish this combination of scalability and rigorous bounds by exploiting properties of the problems we are trying to optimize, decomposing or compressing the input graph to a manageable size, and parallelization.
We consider problems on network analysis for both static and dynamic network models, and we illustrate the power of our methods in applications such as public health, sensor data analysis, and event detection using social media data.
Advisors/Committee Members: Vullikanti, Anil Kumar S. (committeechair), Marathe, Madhav Vishnu (committee member), Lu, Chang Tien (committee member), Konjevod, Goran (committee member), Ramakrishnan, Naren (committee member).
Subjects/Keywords: Graph Mining; Data Mining; Graph Algorithms; Anomaly Detection; Finding Subgraphs; Parameterized Complexity; Distributed Algorithms
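A classical example of "finding an interesting subgraph with a guarantee" is Charikar's greedy peeling algorithm for the densest subgraph (density = edges/vertices), which carries a 2-approximation guarantee. It is included here only as a concrete illustration of the kind of problem and guarantee the abstract discusses, not as one of the dissertation's algorithms; the example graph is made up.

```python
import heapq

def densest_subgraph_peeling(adj):
    """Charikar's greedy peeling: repeatedly remove a minimum-degree vertex and keep
    the intermediate subgraph of highest density |E|/|V| (2-approximation)."""
    adj = {u: set(vs) for u, vs in adj.items()}
    m = sum(len(vs) for vs in adj.values()) // 2
    n = len(adj)
    heap = [(len(vs), u) for u, vs in adj.items()]
    heapq.heapify(heap)
    removed = set()
    best_density, best_size = (m / n if n else 0.0), n
    while len(removed) < n:
        d, u = heapq.heappop(heap)
        if u in removed or d != len(adj[u]):
            continue                      # stale heap entry
        removed.add(u)
        for v in adj[u]:
            adj[v].discard(u)
            heapq.heappush(heap, (len(adj[v]), v))
        m -= d
        remaining = n - len(removed)
        if remaining and m / remaining > best_density:
            best_density, best_size = m / remaining, remaining
    return best_density, best_size

# A 4-clique with a pendant vertex: the clique (density 6/4 = 1.5) should win.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
print(densest_subgraph_peeling(adj))   # (1.5, 4)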
13.
Chhabra, Meenal.
Studies in the Algorithmic Pricing of Information Goods and Services.
Degree: PhD, Computer Science and Applications, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/25874
▼ This thesis makes a contribution to the algorithmic pricing literature by proposing and analyzing techniques for automatically pricing digital and information goods in order to maximize profit in different settings. We also consider the effect on social welfare when agents use these pricing algorithms. The digital goods considered in this thesis are electronic commodities that have zero marginal cost and unlimited supply e.g., iTunes apps. On the other hand, an information good is an entity that bridges the knowledge gap about a product between the consumer and the seller when the consumer cannot assess the utility of owning that product accurately e.g., Carfax provides vehicle history and can be used by a potential buyer of a vehicle to get information about the vehicle.
With the emergence of e-commerce, customers are increasingly price sensitive and search for the best opportunities anywhere. It is almost impossible to manually adjust prices in the face of rapidly changing demand and competition. Moreover, online shopping platforms enable sellers to change prices easily and quickly, as opposed to updating price labels in brick-and-mortar stores, so they can experiment with different prices to maximize their revenue. Therefore, e-marketplaces have created a need for designing sophisticated, practical pricing algorithms. This need has evoked interest in algorithmic pricing in the computer science, economics, and operations research communities.
In this thesis, we seek solutions to the following two algorithmic pricing problems:
(1) In the first problem, a seller launches a new digital good (this good has unlimited supply and zero marginal cost) but is unaware of its demand in a posted-price setting (i.e., the seller quotes a price to a buyer, and the buyer makes a decision depending on her willingness to pay); we look at the question – how should the seller set the prices in order to maximize her infinite horizon discounted revenue? This is a classic problem of learning while earning. We propose a few algorithms for this problem and demonstrate their effectiveness using rigorous empirical tests on both synthetic datasets and real-world datasets from auctions at eBay and Yahoo!, and ratings on jokes from Jester, an online joke recommender system. We also show that under certain conditions the myopic Bayesian strategy is also Bayes-optimal. Moreover, this strategy has finite regret (independent of time) which means that it also learns very fast.
(2) The second problem is based on search markets: a consumer is searching for a product sequentially (i.e., she examines possible options one by one and on observing them decides whether to buy or not). However, merely observing a good, although partially informative, does not typically provide the potential purchaser with the complete information set necessary to execute her buying decision. This lack of perfect information about the good creates a market for intermediaries (we refer to them as experts) who can conduct research on behalf of the buyer and sell…
Advisors/Committee Members: Das, Sanmay (committeechair), Vullikanti, Anil Kumar S. (committeechair), Sarne, David (committee member), Ryzhov, Ilya O. (committee member), Ramakrishnan, Naren (committee member).
Subjects/Keywords: Non-linear pricing; Sequential Search; Algorithm pricing; Information goods; Dynamic pricing; Revenue maximization; Reinforcement learning; Search markets
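The "learning while earning" setting in part (1) can be illustrated with a simple Thompson-sampling posted-price learner over a small grid of candidate prices, keeping a Beta posterior on each price's purchase probability. This is a generic illustration of the problem setup, not one of the algorithms proposed or evaluated in the dissertation; the price grid and demand curve below are made up.

```python
import random

def thompson_posted_prices(prices, buy_prob, rounds=5000, seed=0):
    """Post one price per arriving buyer; maintain Beta(successes+1, failures+1)
    posteriors on each price's purchase probability and sample to pick the next price."""
    rng = random.Random(seed)
    wins = {p: 0 for p in prices}    # purchases seen at price p
    losses = {p: 0 for p in prices}  # rejections seen at price p
    revenue = 0.0
    for _ in range(rounds):
        # Sample expected revenue (price * sampled purchase prob) for each price.
        p = max(prices, key=lambda q: q * rng.betavariate(wins[q] + 1, losses[q] + 1))
        if rng.random() < buy_prob(p):
            wins[p] += 1
            revenue += p
        else:
            losses[p] += 1
    return revenue, max(prices, key=lambda q: wins[q] + losses[q])

# Made-up demand: willingness to pay falls off linearly up to 10.
buy_prob = lambda price: max(0.0, 1.0 - price / 10.0)
print(thompson_posted_prices([2.0, 4.0, 6.0, 8.0], buy_prob))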
14.
Zhang, Yao.
Optimizing and Understanding Network Structure for Diffusion.
Degree: PhD, Computer Science and Applications, 2017, Virginia Tech
URL: http://hdl.handle.net/10919/79674
▼ Given a population contact network and electronic medical records of patients, how to distribute vaccines to individuals to effectively control a flu epidemic? Similarly, given the Twitter following network and tweets, how to choose the best communities/groups to stop rumors from spreading? How to find the best accounts that bridge celebrities and ordinary users? These questions are related to diffusion (aka propagation) phenomena. Diffusion can be treated as a behavior of spreading contagions (like viruses, ideas, memes, etc.) on some underlying network. It is omnipresent in areas such as social media, public health, and cyber security. Examples include diseases like flu spreading on person-to-person contact networks, memes disseminating by online adoption over online friendship networks, and malware propagating among computer networks. When a contagion spreads, network structure (like nodes/edges/groups, etc.) plays a major role in determining the outcome. For instance, a rumor, if propagated by celebrities, can go viral. Similarly, an epidemic can die out quickly, if vulnerable demographic groups are successfully targeted for vaccination.
Hence, in this thesis, we aim to optimize and understand network structure better in light of diffusion. We optimize graph topologies by removing nodes/edges to keep rumors/viruses from spreading, and we gain a deeper understanding of a network in terms of diffusion by exploring how nodes group together for similar roles in dissemination. We develop several novel graph mining algorithms, with different levels of granularity (node/edge level to group/community level), from model-driven and data-driven perspectives, focusing on topics like immunization on networks, graph summarization, and community detection. In contrast to previous work, ours is the first to systematically develop more realistic, implementable, and data-based graph algorithms to control contagions. In addition, our thesis is the first work to use diffusion to effectively summarize graphs and understand communities/groups of networks in a general way.
1. Model-driven. Diffusion processes are usually described using mathematical models, e.g., the Independent Cascade (IC) model in social media, and the Susceptible-Infectious-Recovered (SIR) model in epidemiology. Given such models, we propose to optimize network structure for controlling propagation (the immunization problem) in several practical and implementable settings, taking into account the presence of infections, the uncertain nature of the data and group structure of the population. We develop efficient algorithms for different interventions, such as vaccination (node removal) and quarantining (edge removal). In addition, we study the graph coarsening problem for both static and temporal networks to obtain a better understanding of relations among nodes when a contagion is propagating. We seek to get a much smaller representation of a large network, while preserving its diffusive properties.
2. Data-driven. Model-driven approaches can provide…
Advisors/Committee Members: Prakash, Bodicherla Aditya (committeechair), Huang, Bert (committee member), Kumar, Ravi (committee member), Vullikanti, Anil Kumar S. (committee member), Ramakrishnan, Naren (committee member).
Subjects/Keywords: Data Mining; Graph/Network; Diffusion
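To ground the vaccination-as-node-removal idea, the sketch below removes the k highest-degree nodes (a simple baseline heuristic, not the model-driven algorithms developed in the dissertation) and estimates the resulting expected outbreak size with Monte Carlo runs of an independent-cascade-style spread. The graph, transmission probability, and seeds are invented.

```python
import random
from collections import deque

def spread_size(adj, seeds, p, rng):
    """One cascade run: each newly infected node infects each neighbor with probability p."""
    infected = set(seeds)
    queue = deque(seeds)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in infected and rng.random() < p:
                infected.add(v)
                queue.append(v)
    return len(infected)

def expected_outbreak(adj, seeds, p, removed=(), runs=500, seed=1):
    """Average outbreak size after deleting `removed` ('vaccinated') nodes."""
    rng = random.Random(seed)
    removed = set(removed)
    pruned = {u: [v for v in nbrs if v not in removed]
              for u, nbrs in adj.items() if u not in removed}
    live_seeds = [s for s in seeds if s not in removed]
    return sum(spread_size(pruned, live_seeds, p, rng) for _ in range(runs)) / runs

def top_degree_nodes(adj, k):
    """Baseline 'vaccination' heuristic: pick the k highest-degree nodes for removal."""
    return sorted(adj, key=lambda u: -len(adj[u]))[:k]

# Toy star-plus-path graph: removing the hub (node 0) shrinks outbreaks sharply.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5], 5: [4]}
print(expected_outbreak(adj, seeds=[1], p=0.5))
print(expected_outbreak(adj, seeds=[1], p=0.5, removed=top_degree_nodes(adj, 1)))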
15.
Poirel, Christopher L.
Bridging Methodological Gaps in Network-Based Systems Biology.
Degree: PhD, Computer Science and Applications, 2013, Virginia Tech
URL: http://hdl.handle.net/10919/23899
▼ Functioning of the living cell is controlled by a complex network of interactions among genes, proteins, and other molecules. A major goal of systems biology is to understand and explain the mechanisms by which these interactions govern the cell's response to various conditions. Molecular interaction networks have proven to be a powerful representation for studying cellular behavior. Numerous algorithms have been developed to unravel the complexity of these networks. Our work addresses the drawbacks of existing techniques. This thesis includes three related research efforts that introduce network-based approaches to bridge current methodological gaps in systems biology.
i. Functional enrichment methods provide a summary of biological functions that are overrepresented in an interesting collection of genes (e.g., highly differentially expressed genes between a diseased cell and a healthy cell). Standard functional enrichment algorithms ignore the known interactions among proteins. We propose a novel network-based approach to functional enrichment that explicitly accounts for these underlying molecular interactions. Through this work, we close the gap between set-based functional enrichment and topological analysis of molecular interaction networks.
ii. Many techniques have been developed to compute the response network of a cell. A recent trend in this area is to compute response networks of small size, with the rationale that only part of a pathway is often changed by disease and that interpreting small subnetworks is easier than interpreting larger ones. However, these methods may not uncover the spectrum of pathways perturbed in a particular experiment or disease. To avoid these difficulties, we propose to use algorithms that reconcile case-control DNA microarray data with a molecular interaction network by modifying per-gene differential expression p-values such that two genes connected by an interaction show similar changes in their gene expression values.
iii. Top-down analyses in systems biology can automatically find correlations among genes and proteins in large-scale datasets. However, it is often difficult to design experiments from these results. In contrast, bottom-up approaches painstakingly craft detailed models of cellular processes. However, developing the models is a manual process that can take many years. These approaches have largely been developed independently. We present Linker, an efficient and automated data-driven method that analyzes molecular interactomes. Linker combines teleporting random walks and k-shortest path computations to discover connections from a set of source proteins to a set of target proteins. We demonstrate the efficacy of Linker through two applications: proposing extensions to an existing model of cell cycle regulation in budding yeast and automated reconstruction of human signaling pathways. Linker achieves superior precision and recall compared to state-of-the-art algorithms from the literature.
Advisors/Committee Members: Murali, T. M. (committeechair), Vullikanti, Anil Kumar S. (committee member), Grama, Ananth (committee member), Tyson, John J. (committee member), Ramakrishnan, Naren (committee member).
Subjects/Keywords: Computational Biology; Functional Enrichment; Graph Theory; Network; Random Walk; Signaling Pathways; Top-Down Analysis
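One of the two ingredients Linker is described as combining, teleporting random walks, can be illustrated with a short power-iteration computation of a personalized PageRank-style vector over an adjacency list. This sketch only shows the primitive, not Linker itself; the toy network, damping factor, and iteration count are assumptions for the example.

```python
def teleporting_random_walk(adj, sources, alpha=0.85, iters=100):
    """Stationary distribution of a walk that, with probability 1 - alpha, teleports
    back to the source set (personalized PageRank via power iteration)."""
    nodes = list(adj)
    restart = {v: (1.0 / len(sources) if v in sources else 0.0) for v in nodes}
    p = dict(restart)
    for _ in range(iters):
        nxt = {v: (1 - alpha) * restart[v] for v in nodes}
        for u in nodes:
            out = adj[u]
            if not out:                       # dangling node: teleport all its mass
                for v in nodes:
                    nxt[v] += alpha * p[u] * restart[v]
            else:
                share = alpha * p[u] / len(out)
                for v in out:
                    nxt[v] += share
        p = nxt
    return p

# Toy protein-interaction-style network; scores concentrate near the source "A".
adj = {"A": ["B", "C"], "B": ["C"], "C": ["A", "D"], "D": []}
scores = teleporting_random_walk(adj, sources={"A"})
print(sorted(scores.items(), key=lambda kv: -kv[1]))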
16.
Parikh, Nidhi Kiranbhai.
Behavior Modeling and Analytics for Urban Computing: A Synthetic Information-based Approach.
Degree: PhD, Computer Science and Applications, 2017, Virginia Tech
URL: http://hdl.handle.net/10919/84967
▼ The rapid increase in urbanization poses challenges in diverse areas such as energy, transportation, pandemic planning, and disaster response. Planning for urbanization is a big challenge because cities are complex systems consisting of human populations, infrastructures, and the interactions and interdependence among them. This dissertation focuses on a synthetic information-based approach for modeling human activities and behaviors for two urban science applications, epidemiology and disaster planning, along with associated analytics. Synthetic information is a data-driven approach to creating a detailed, high-fidelity representation of human populations, infrastructural systems, and their behavioral and interaction aspects. It is used in developing large-scale simulations to model what-if scenarios and for policy making.
Big cities have a large number of visitors visiting them every day. They often visit crowded areas in the city and come into contact with each other and the area residents. However, most epidemiological studies have ignored their role in spreading epidemics. We extend the synthetic population model of the Washington DC metro area to include transient populations, consisting of tourists and business travelers, along with their demographics and activities, by combining data from multiple sources. We evaluate the effect of including this population in epidemic forecasts, and the potential benefits of multiple interventions that target transients.
In the next study, we model human behavior in the aftermath of the detonation of an improvised nuclear device in Washington DC. Previous studies of this scenario have mostly focused on modeling physical impact and simple behaviors like sheltering and evacuation. However, these models have focused on optimal behavior, not naturalistic behavior. In other words, prior work is focused on whether it is better to shelter-in-place or evacuate, but has not been informed by the literature on what people actually do in the aftermath of disasters. Natural human behaviors in disasters, such as looking for family members or seeking healthcare, are supported by infrastructures such as cell-phone communication and transportation systems. We model a range of behaviors such as looking for family members, evacuation, sheltering, healthcare-seeking, worry, and search and rescue and their interactions with infrastructural systems.
Large-scale, complex agent-based simulations generate a large amount of data in each run, making it hard to make sense of the results. This leads us to formulate two new problems in simulation analytics. First, we develop algorithms to summarize simulation results by extracting causally-relevant state sequences - state sequences that have a measurable effect on the outcome of interest. Second, in order to develop effective interventions, it is important to understand which behaviors lead to positive and negative outcomes. The same behavior may lead to different outcomes, depending upon the context. Hence, we develop an…
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Swarup, Samarth (committeechair), Vullikanti, Anil Kumar S. (committee member), Sukthankar, Gita Reese (committee member), Ramakrishnan, Naren (committee member).
Subjects/Keywords: Behavior Modeling; Simulation Analytics; Social Simulations; Synthetic Information; Transient Population; Urban Computing
APA (6th Edition):
Parikh, N. K. (2017). Behavior Modeling and Analytics for Urban Computing: A Synthetic Information-based Approach. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/84967
Chicago Manual of Style (16th Edition):
Parikh, Nidhi Kiranbhai. “Behavior Modeling and Analytics for Urban Computing: A Synthetic Information-based Approach.” 2017. Doctoral Dissertation, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/84967.
MLA Handbook (7th Edition):
Parikh, Nidhi Kiranbhai. “Behavior Modeling and Analytics for Urban Computing: A Synthetic Information-based Approach.” 2017. Web. 22 Jan 2021.
Vancouver:
Parikh NK. Behavior Modeling and Analytics for Urban Computing: A Synthetic Information-based Approach. [Internet] [Doctoral dissertation]. Virginia Tech; 2017. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/84967.
Council of Science Editors:
Parikh NK. Behavior Modeling and Analytics for Urban Computing: A Synthetic Information-based Approach. [Doctoral Dissertation]. Virginia Tech; 2017. Available from: http://hdl.handle.net/10919/84967

Virginia Tech
17.
Tuli, Gaurav.
Modeling and Twitter-based Surveillance of Smoking Contagion.
Degree: PhD, Computer Science and Applications, 2016, Virginia Tech
URL: http://hdl.handle.net/10919/64426
► Nicotine, in the form of cigarette smoking, chewing tobacco, and most recently as vapor smoking, is one of the most heavily used addictive drugs in…
(more)
▼ Nicotine, in the form of cigarette smoking, chewing tobacco, and most recently vapor smoking, is one of the most heavily used addictive drugs in the world. Since smoking imposes a significant health-care and economic burden on the population, there have been sustained and significant efforts over the past several decades to control it. However, the smoking epidemic is a complex and "policy-resistant" problem that has proven difficult to control. Despite the known importance of social networks in the smoking epidemic, no network-centric intervention for controlling it has yet been available.
The long-term goal of this work is the development and implementation of an environment for designing network-centric interventions to control the smoking contagion. To develop such an environment we essentially need an operationalized model of smoking that can be simulated, a determination of the role of online social networks in smoking behavior, and actual methods to perform network-centric interventions. The objective of this thesis is to take first steps in all of these categories. We perform Twitter-based surveillance of smoking-related tweets, and use mathematical modeling and simulation techniques to achieve our objective.
Specifically, we use Twitter data to infer sentiments on smoking and electronic cigarettes, to estimate the proportion of the underage user population that gets exposed to smoking-related messaging, and to identify statistically anomalous clusters of counties where people discuss electronic cigarettes far more than expected. In other work, we employ a mathematical modeling and simulation approach to study how different factors, such as addictiveness and peer influence, together contribute to the diffusion of smoking behavior, and we also develop two methods to stymie social contagion. This led to a total of four smoking contagion-related studies. These studies are a first step towards the development of a network-centric intervention environment for controlling the smoking contagion, and also show that such an environment is realizable.
Advisors/Committee Members: Swarup, Samarth (committeechair), Marathe, Madhav Vishnu (committeechair), Ramakrishnan, Naren (committee member), Lakkaraju, Kiran (committee member), Vullikanti, Anil Kumar S. (committee member).
Subjects/Keywords: Tobacco Epidemic; Twitter-based Surveillance; Smoking-related Messaging; Electronic-cigarette; Networks; Control of Contagion Processes; Modeling and Simulation
APA (6th Edition):
Tuli, G. (2016). Modeling and Twitter-based Surveillance of Smoking Contagion. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/64426
Chicago Manual of Style (16th Edition):
Tuli, Gaurav. “Modeling and Twitter-based Surveillance of Smoking Contagion.” 2016. Doctoral Dissertation, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/64426.
MLA Handbook (7th Edition):
Tuli, Gaurav. “Modeling and Twitter-based Surveillance of Smoking Contagion.” 2016. Web. 22 Jan 2021.
Vancouver:
Tuli G. Modeling and Twitter-based Surveillance of Smoking Contagion. [Internet] [Doctoral dissertation]. Virginia Tech; 2016. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/64426.
Council of Science Editors:
Tuli G. Modeling and Twitter-based Surveillance of Smoking Contagion. [Doctoral Dissertation]. Virginia Tech; 2016. Available from: http://hdl.handle.net/10919/64426

Virginia Tech
18.
Pei, Guanhong.
Distributed Scheduling and Delay-Throughput Optimization in Wireless Networks under the Physical Interference Model.
Degree: PhD, Electrical Engineering, 2013, Virginia Tech
URL: http://hdl.handle.net/10919/19219
► We investigate diverse aspects of the performance of wireless networks, including throughput, delay and distributed complexity. One of the main challenges for optimizing them arises…
(more)
▼ We investigate diverse aspects of the performance of wireless networks, including throughput, delay and distributed complexity. One of the main challenges for optimizing them arises from radio interference, an inherent factor in wireless networks. Graph-based interference models represent a large class of interference models widely used for the study of wireless networks, and suffer from the weakness of over-simplifying the interference caused by wireless signals in a local and binary way. A more sophisticated interference model, the physical interference model, based on SINR constraints, is considered more realistic but is more challenging to study (because of its non-linear form and non-local property). In this dissertation, we study the connections between the two types of interference models – graph-based and physical interference models – and tackle a set of fundamental problems under the physical interference model; previously, some of the problems were still open even under the graph-based interference model, and to those we have provided solutions under both types of interference models. The underlying interference models affect scheduling and power control – essential building blocks in the operation of wireless networks – that directly deal with the wireless medium; the physical interference model (compared to the graph-based interference model) compounds the problem of efficient scheduling and power control by making it non-local and non-linear. System performance optimization and tradeoffs with respect to throughput and delay require a "global" view across the transport, network, media access control (MAC), and physical layers (referred to as cross-layer optimization) to take advantage of the control planes in different levels of the wireless network protocol stack. This can be achieved by regulating traffic rates, finding traffic flow paths for end-to-end sessions, controlling the access to the wireless medium (or channels), assigning the transmission power, and handling signal reception under interference. The theme of the dissertation is distributed algorithms and optimization of QoS objectives under the physical interference model. We start by developing the first low-complexity distributed scheduling and power control algorithms for maximizing the efficiency ratio for different interference models; we derive end-to-end per-flow delay upper-bounds for our scheduling algorithms, and our delay upper-bounds are the first network-size-independent result known for multihop traffic. Based on that, we design the first cross-layer multi-commodity optimization frameworks for delay-constrained throughput maximization by incorporating the routing and traffic control into the problem scope. Scheduling and power control are also inherent to distributed computing of "global problems", e.g., the maximum independent set problems in terms of transmitting links and local broadcasts respectively, and…
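For reference, the SINR constraint that physical-interference models of this kind are typically built on can be written as follows; the notation below is illustrative and is not quoted from the dissertation.

```latex
% Standard SINR feasibility condition assumed by physical-interference models
% (illustrative notation, not quoted from the dissertation): a set S of links
% may transmit simultaneously if every link l = (s_l, r_l) in S satisfies
\[
  \mathrm{SINR}_{\ell} \;=\;
  \frac{P_{\ell}\, d(s_{\ell}, r_{\ell})^{-\alpha}}
       {N_0 \;+\; \sum_{k \in S,\, k \neq \ell} P_k \, d(s_k, r_{\ell})^{-\alpha}}
  \;\ge\; \beta ,
\]
% where P_l is link l's transmission power, d(.,.) is distance, alpha is the
% path-loss exponent, N_0 is ambient noise, and beta is the decoding threshold.
```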
Advisors/Committee Members: Vullikanti, Anil Kumar S. (committeechair), Ravindran, Binoy (committee member), Srinivasan, Aravind (committee member), Hou, Yiwei Thomas (committee member), Marathe, Madhav Vishnu (committee member).
Subjects/Keywords: Wireless Networks; Cross-layer Design; Physical Interference; Approximation Algorithms; Distributed Algorithms
APA (6th Edition):
Pei, G. (2013). Distributed Scheduling and Delay-Throughput Optimization in Wireless Networks under the Physical Interference Model. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/19219
Chicago Manual of Style (16th Edition):
Pei, Guanhong. “Distributed Scheduling and Delay-Throughput Optimization in Wireless Networks under the Physical Interference Model.” 2013. Doctoral Dissertation, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/19219.
MLA Handbook (7th Edition):
Pei, Guanhong. “Distributed Scheduling and Delay-Throughput Optimization in Wireless Networks under the Physical Interference Model.” 2013. Web. 22 Jan 2021.
Vancouver:
Pei G. Distributed Scheduling and Delay-Throughput Optimization in Wireless Networks under the Physical Interference Model. [Internet] [Doctoral dissertation]. Virginia Tech; 2013. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/19219.
Council of Science Editors:
Pei G. Distributed Scheduling and Delay-Throughput Optimization in Wireless Networks under the Physical Interference Model. [Doctoral Dissertation]. Virginia Tech; 2013. Available from: http://hdl.handle.net/10919/19219

Virginia Tech
19.
Kuhlman, Christopher J.
High Performance Computational Social Science Modeling of Networked Populations.
Degree: PhD, Computer Science and Applications, 2013, Virginia Tech
URL: http://hdl.handle.net/10919/51175
► Dynamics of social processes in populations, such as the spread of emotions, influence, opinions, and mass movements (often referred to individually and collectively as contagions),…
(more)
▼ Dynamics of social processes in populations, such as the spread of emotions, influence, opinions, and mass movements (often referred to individually and collectively as contagions), are increasingly studied because of their economic, social, and political impacts. Moreover, multiple contagions may interact and hence studying their simultaneous evolution is important. Within the context of social media, large datasets involving many tens of millions of people are leading to new insights into human behavior, and these datasets continue to grow in size. Through social media, contagions can readily cross national boundaries, as evidenced by the 2011 Arab Spring. These and other observations guide our work. Our goal is to study contagion processes at scale with an approach that permits intricate descriptions of interactions among members of a population. Our contributions are a modeling environment to perform these computations and a set of approaches to predict contagion spread size and to block the spread of contagions. Since we represent populations as networks, we also provide insights into network structure effects, and present and analyze a new model of contagion dynamics that represents a person's behavior in repeatedly joining and withdrawing from collective action. We study variants of problems for different classes of social contagions, including those known as simple and complex contagions.
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Tilevich, Eli (committee member), Ravi, Sekharipuram (committee member), Mortveit, Henning S. (committee member), Vullikanti, Anil Kumar S. (committee member).
Subjects/Keywords: Social behavior; Contagions; Networks; Control of contagion processes; Graph dynamical systems; Modeling and simulation; Rapid d
APA (6th Edition):
Kuhlman, C. J. (2013). High Performance Computational Social Science Modeling of Networked Populations. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/51175
Chicago Manual of Style (16th Edition):
Kuhlman, Christopher J. “High Performance Computational Social Science Modeling of Networked Populations.” 2013. Doctoral Dissertation, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/51175.
MLA Handbook (7th Edition):
Kuhlman, Christopher J. “High Performance Computational Social Science Modeling of Networked Populations.” 2013. Web. 22 Jan 2021.
Vancouver:
Kuhlman CJ. High Performance Computational Social Science Modeling of Networked Populations. [Internet] [Doctoral dissertation]. Virginia Tech; 2013. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/51175.
Council of Science Editors:
Kuhlman CJ. High Performance Computational Social Science Modeling of Networked Populations. [Doctoral Dissertation]. Virginia Tech; 2013. Available from: http://hdl.handle.net/10919/51175
20.
Hamid, Tania.
On the Feasibility of MapReduce to Compute Phase Space Properties of Graphical Dynamical Systems: An Empirical Study.
Degree: MS, Computer Science and Applications, 2015, Virginia Tech
URL: http://hdl.handle.net/10919/54546
► A graph dynamical system (GDS) is a theoretical construct that can be used to simulate and analyze the dynamics of a wide spectrum of real…
(more)
▼ A graph dynamical system (GDS) is a theoretical construct that can be used to simulate and analyze the dynamics of a wide spectrum of real world processes which can be modeled as networked systems. One of our goals is to compute the phase space of a system, and for this, even 30-vertex graphs present a computational challenge. This is because the number of state transitions needed to compute the phase space is exponential in the number of graph vertices. These problems thus produce memory and execution speed challenges. To address this, we devise various MapReduce programming paradigms that can be used to characterize system state transitions, compute phase spaces, functional equivalence classes, dynamic equivalence classes and cycle equivalence classes of dynamical systems. We also evaluate these paradigms and analyze their suitability for modeling different GDSs.
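To make the map/reduce structure of such a phase-space computation concrete, here is a minimal in-process Python sketch, assuming a small synchronous Boolean GDS with threshold-style local functions; the graph, thresholds, and function names are illustrative, and the thesis itself targets Hadoop rather than this stand-in.

```python
# Illustrative sketch of the MapReduce flavor of phase-space computation for a
# synchronous graph dynamical system (GDS).  All names and the example graph are
# assumptions for illustration; the thesis evaluates Hadoop-based variants.
from itertools import product
from collections import defaultdict

# Example GDS: undirected triangle graph, Boolean states, threshold-2 local functions.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
threshold = {0: 2, 1: 2, 2: 2}

def successor(state):
    """Synchronous update: vertex i becomes 1 iff at least threshold[i]
    vertices among itself and its neighbors are 1 in the current state."""
    return tuple(
        1 if state[i] + sum(state[j] for j in neighbors[i]) >= threshold[i] else 0
        for i in range(len(state))
    )

def map_phase(states):
    """Map: emit (next_state, state) for every state transition."""
    for s in states:
        yield successor(s), s

def reduce_phase(pairs):
    """Reduce: group predecessors by successor, yielding the phase-space edges."""
    phase_space = defaultdict(list)
    for nxt, s in pairs:
        phase_space[nxt].append(s)
    return phase_space

if __name__ == "__main__":
    all_states = list(product((0, 1), repeat=3))        # 2^n states
    phase_space = reduce_phase(map_phase(all_states))
    for nxt, preds in sorted(phase_space.items()):
        print(nxt, "<-", preds)                         # fixed points show up as self-loops
```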
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Kuhlman, Christopher James (committeechair), Ribbens, Calvin J. (committee member), Vullikanti, Anil Kumar S. (committee member).
Subjects/Keywords: Graph Dynamical Systems; GDS; MapReduce; Map; Reduce; Hadoop
APA (6th Edition):
Hamid, T. (2015). On the Feasibility of MapReduce to Compute Phase Space Properties of Graphical Dynamical Systems: An Empirical Study. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/54546
Chicago Manual of Style (16th Edition):
Hamid, Tania. “On the Feasibility of MapReduce to Compute Phase Space Properties of Graphical Dynamical Systems: An Empirical Study.” 2015. Masters Thesis, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/54546.
MLA Handbook (7th Edition):
Hamid, Tania. “On the Feasibility of MapReduce to Compute Phase Space Properties of Graphical Dynamical Systems: An Empirical Study.” 2015. Web. 22 Jan 2021.
Vancouver:
Hamid T. On the Feasibility of MapReduce to Compute Phase Space Properties of Graphical Dynamical Systems: An Empirical Study. [Internet] [Masters thesis]. Virginia Tech; 2015. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/54546.
Council of Science Editors:
Hamid T. On the Feasibility of MapReduce to Compute Phase Space Properties of Graphical Dynamical Systems: An Empirical Study. [Masters Thesis]. Virginia Tech; 2015. Available from: http://hdl.handle.net/10919/54546

Virginia Tech
21.
Maloo, Akshay.
Dynamic Behavior Visualizer: A Dynamic Visual Analytics Framework for Understanding Complex Networked Models.
Degree: MS, Computer Science and Applications, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/25296
► Dynamic Behavior Visualizer (DBV) is a visual analytics environment to visualize the spatial and temporal movements and behavioral changes of an individual or a group,…
(more)
▼ Dynamic Behavior Visualizer (DBV) is a visual analytics environment for visualizing the spatial and temporal movements and behavioral changes of an individual or a group, e.g., a family, within a realistic urban environment. DBV is specifically designed to visualize adaptive behavioral changes, as they pertain to interactions with multiple inter-dependent infrastructures, in the aftermath of a large crisis, e.g., a hurricane or the detonation of an improvised nuclear device. DBV is web-enabled and thus easily accessible to any user with a web browser. A novel aspect of the system is its scale and fidelity. The goal of DBV is to synthesize information and derive insight from it; detect the expected and discover the unexpected; and provide timely, easily understandable assessments and the ability to piece together all of this information.
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Eubank, Stephen G. (committee member), Vullikanti, Anil Kumar S. (committee member), Xie, Dawen (committee member).
Subjects/Keywords: Information Visualization; Visual Analytics; Data Modeling; Networked Models
APA (6th Edition):
Maloo, A. (2014). Dynamic Behavior Visualizer: A Dynamic Visual Analytics Framework for Understanding Complex Networked Models. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/25296
Chicago Manual of Style (16th Edition):
Maloo, Akshay. “Dynamic Behavior Visualizer: A Dynamic Visual Analytics Framework for Understanding Complex Networked Models.” 2014. Masters Thesis, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/25296.
MLA Handbook (7th Edition):
Maloo, Akshay. “Dynamic Behavior Visualizer: A Dynamic Visual Analytics Framework for Understanding Complex Networked Models.” 2014. Web. 22 Jan 2021.
Vancouver:
Maloo A. Dynamic Behavior Visualizer: A Dynamic Visual Analytics Framework for Understanding Complex Networked Models. [Internet] [Masters thesis]. Virginia Tech; 2014. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/25296.
Council of Science Editors:
Maloo A. Dynamic Behavior Visualizer: A Dynamic Visual Analytics Framework for Understanding Complex Networked Models. [Masters Thesis]. Virginia Tech; 2014. Available from: http://hdl.handle.net/10919/25296

Virginia Tech
22.
Kannan, Vijayasarathy.
A Distributed Approach to EpiFast using Apache Spark.
Degree: MS, Computer Science and Applications, 2015, Virginia Tech
URL: http://hdl.handle.net/10919/55272
► EpiFast is a parallel algorithm for large-scale epidemic simulations, based on an interpretation of the stochastic disease propagation in a contact network. The original EpiFast…
(more)
▼ EpiFast is a parallel algorithm for large-scale epidemic simulations, based on an interpretation of stochastic disease propagation in a contact network. The original EpiFast implementation is based on a master-slave computation model with a focus on distributed memory using the Message Passing Interface (MPI). However, it suffers from a few shortcomings with respect to the scale of networks that can be studied. This thesis addresses these shortcomings and provides two different implementations: Spark-EpiFast, based on the Apache Spark big data processing engine, and Charm-EpiFast, based on the Charm++ parallel programming framework. The study focuses on exploiting features of both systems that we believe could benefit performance and scalability. We present models of EpiFast specific to each system and relate algorithm specifics to several optimization techniques. We also provide a detailed analysis of these optimizations through a range of experiments that consider the scale of the networks and the environment settings we used. Our analysis shows that the Spark-based version is more efficient than the Charm++ and MPI-based counterparts. To the best of our knowledge, ours is one of the first efforts to use Apache Spark for epidemic simulations. We believe that our proposed model could act as a reference for similar large-scale epidemiological simulations exploring non-MPI or MapReduce-like approaches.
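As a rough illustration of the RDD-style computation the abstract refers to (and emphatically not the authors' Spark-EpiFast code), a single time step of stochastic disease propagation over a contact-network edge list might be sketched in PySpark as follows; the edge list, transmission probability, and variable names are assumptions.

```python
# Toy sketch of one time step of stochastic disease propagation on a contact
# network using Spark RDDs.  Not the authors' Spark-EpiFast implementation;
# the edge list, probability, and names are illustrative assumptions.
import random
from pyspark import SparkContext

sc = SparkContext(appName="epifast-sketch")

# Contact network as (source, target) pairs; in practice loaded from distributed storage.
edges = sc.parallelize([(1, 2), (2, 3), (3, 4), (1, 4), (4, 5)])

infected = {1}          # currently infectious individuals
p_transmit = 0.3        # per-contact transmission probability

def try_transmit(edge, infected_set, p):
    """Emit the susceptible endpoint if this contact transmits the disease."""
    u, v = edge
    if u in infected_set and v not in infected_set and random.random() < p:
        yield v
    if v in infected_set and u not in infected_set and random.random() < p:
        yield u

b_infected = sc.broadcast(infected)   # ship the (small) infected set to all workers
newly_infected = (edges
                  .flatMap(lambda e: try_transmit(e, b_infected.value, p_transmit))
                  .distinct()
                  .collect())

print("newly infected this step:", newly_infected)
sc.stop()
```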
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Marathe, Achla (committee member), Vullikanti, Anil Kumar S. (committee member), Chen, Jiangzhuo (committee member).
Subjects/Keywords: computational epidemiology; parallel programming; distributed computing
APA (6th Edition):
Kannan, V. (2015). A Distributed Approach to EpiFast using Apache Spark. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/55272
Chicago Manual of Style (16th Edition):
Kannan, Vijayasarathy. “A Distributed Approach to EpiFast using Apache Spark.” 2015. Masters Thesis, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/55272.
MLA Handbook (7th Edition):
Kannan, Vijayasarathy. “A Distributed Approach to EpiFast using Apache Spark.” 2015. Web. 22 Jan 2021.
Vancouver:
Kannan V. A Distributed Approach to EpiFast using Apache Spark. [Internet] [Masters thesis]. Virginia Tech; 2015. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/55272.
Council of Science Editors:
Kannan V. A Distributed Approach to EpiFast using Apache Spark. [Masters Thesis]. Virginia Tech; 2015. Available from: http://hdl.handle.net/10919/55272

Virginia Tech
23.
Soundarapandian, Manikandan.
Relational Computing Using HPC Resources: Services and Optimizations.
Degree: MS, Computer Science and Applications, 2015, Virginia Tech
URL: http://hdl.handle.net/10919/56586
► Computational epidemiology involves processing, analysing and managing large volumes of data. Such massive datasets cannot be handled efficiently by using traditional standalone database management systems,…
(more)
▼ Computational epidemiology involves processing, analysing and managing large volumes of data. Such massive datasets cannot be handled efficiently by traditional standalone database management systems, owing to their limited computational efficiency and bandwidth for scaling to large volumes of data. In this thesis, we address the management and processing of large volumes of data for modeling, simulation and analysis in epidemiological studies. Traditionally, compute-intensive tasks are processed using high-performance computing resources and supercomputers, whereas data-intensive tasks are delegated to standalone databases and custom programs. The DiceX framework is a one-stop solution for distributed database management and processing; its main mission is to leverage supercomputing resources for data-intensive computing, in particular relational data processing.
While standalone databases are always on and a user can submit queries at any time, supercomputing resources must be acquired and are available only for a limited time period. These resources are relinquished either upon completion of execution or at the expiration of the allocated time period. This reservation-based usage style poses critical challenges, including building and launching a distributed data engine on the supercomputer, saving the engine and resuming from the saved image, devising efficient optimization upgrades to the data engine, and enabling other applications to seamlessly access the engine. These challenges and requirements lead us to align our approach more closely with the cloud computing paradigms of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). In this thesis, we propose cloud-computing-like workflows that use supercomputing resources to manage and process relational data-intensive tasks. We propose and implement several services, including database freeze, migrate, and resume; ad-hoc resource addition; and table redistribution. These services assist in carrying out the workflows defined.
We also propose an optimization upgrade to the query planning module of postgres-XC, the core relational data processing engine of the DiceX framework. With knowledge of domain semantics, we have devised a more robust data distribution strategy that forces the most time-consuming SQL operations to be pushed down to the postgres-XC data nodes, bypassing the query planner's default shippability criteria without compromising correctness. Forcing query push-down reduces query processing time by roughly 40%-60% for certain complex spatio-temporal queries on our epidemiology datasets.
As part of this work, a generic broker service has also been implemented, which acts as an interface to the DiceX framework by exposing RESTful APIs that applications can use to query and retrieve results, irrespective of their programming language or environment.
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Bisset, Keith R. (committee member), Chen, Jiangzhuo (committee member), Vullikanti, Anil Kumar S. (committee member), Gupta, Sandeep (committee member).
Subjects/Keywords: distributed databases; HPC; supercomputers; computational epidemiology
APA (6th Edition):
Soundarapandian, M. (2015). Relational Computing Using HPC Resources: Services and Optimizations. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/56586
Chicago Manual of Style (16th Edition):
Soundarapandian, Manikandan. “Relational Computing Using HPC Resources: Services and Optimizations.” 2015. Masters Thesis, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/56586.
MLA Handbook (7th Edition):
Soundarapandian, Manikandan. “Relational Computing Using HPC Resources: Services and Optimizations.” 2015. Web. 22 Jan 2021.
Vancouver:
Soundarapandian M. Relational Computing Using HPC Resources: Services and Optimizations. [Internet] [Masters thesis]. Virginia Tech; 2015. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/56586.
Council of Science Editors:
Soundarapandian M. Relational Computing Using HPC Resources: Services and Optimizations. [Masters Thesis]. Virginia Tech; 2015. Available from: http://hdl.handle.net/10919/56586

Virginia Tech
24.
Subbiah, Rajesh.
An activity-based energy demand modeling framework for buildings: A bottom-up approach.
Degree: MS, Computer Science and Applications, 2013, Virginia Tech
URL: http://hdl.handle.net/10919/23084
► Energy consumption by buildings, due to various factors such as temperature regulation, lighting, poses a threat to our environment and energy resources. In the United…
(more)
▼ Energy consumption by buildings, driven by factors such as temperature regulation and lighting, poses a threat to our environment and energy resources. In the United States, statistics reveal that commercial and residential buildings combined contribute about 40 percent of overall energy consumption, and this figure is expected to increase. To manage the growing demand for energy, there is a need for energy system optimization, which in turn requires a realistic, high-resolution energy-demand model. In this work, we investigate and model the energy consumption of buildings by taking into account physical, structural, economic, and social factors that influence energy use. We propose a novel activity-based modeling framework that generates an energy demand profile on a regular basis for a given nominal day. We use this information to generate a building-level energy demand profile at a highly disaggregated level. We then investigate possible uses of the generated demand profiles in different what-if scenarios such as urban-area planning, demand-side management, and demand-sensitive pricing. We also provide a novel way to resolve correlation and consistency problems in the generation of individual-level and building-level "shared" activities, which arise from individuals' interactions.
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Lum, Kristian (committee member), Marathe, Achla (committee member), Vullikanti, Anil Kumar S. (committee member).
Subjects/Keywords: energy; buildings; activity; regression models; smart grid; demand
APA (6th Edition):
Subbiah, R. (2013). An activity-based energy demand modeling framework for buildings: A bottom-up approach. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/23084
Chicago Manual of Style (16th Edition):
Subbiah, Rajesh. “An activity-based energy demand modeling framework for buildings: A bottom-up approach.” 2013. Masters Thesis, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/23084.
MLA Handbook (7th Edition):
Subbiah, Rajesh. “An activity-based energy demand modeling framework for buildings: A bottom-up approach.” 2013. Web. 22 Jan 2021.
Vancouver:
Subbiah R. An activity-based energy demand modeling framework for buildings: A bottom-up approach. [Internet] [Masters thesis]. Virginia Tech; 2013. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/23084.
Council of Science Editors:
Subbiah R. An activity-based energy demand modeling framework for buildings: A bottom-up approach. [Masters Thesis]. Virginia Tech; 2013. Available from: http://hdl.handle.net/10919/23084

Virginia Tech
25.
Singh, Meghendra.
Human Behavior Modeling and Calibration in Epidemic Simulations.
Degree: MS, Computer Science and Applications, 2019, Virginia Tech
URL: http://hdl.handle.net/10919/87050
► In the real world, individuals can decide to adopt certain behaviors that reduce their chances of contracting a disease. For example, using hand sanitizers can…
(more)
▼ In the real world, individuals can decide to adopt certain behaviors that reduce their chances of contracting a disease. For example, using hand sanitizers can reduce an individual's chances of getting infected by influenza. These behavioral decisions, when taken by many individuals in the population, can completely change the course of the disease. Such behavioral decision-making is generally not considered during in-silico simulations of infectious diseases. In this thesis, we address this problem by developing a methodology to create and calibrate a decision-making model that can be used by agents (i.e., synthetic representations of humans in simulations) in a data-driven way. Our method also finds a cost associated with such behaviors and matches the distribution of behavior observed in the real world with that observed in a survey. Our approach is a data-driven way of incorporating decision making for agents in large-scale epidemic simulations.
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Swarup, Samarth (committeechair), Vullikanti, Anil Kumar S. (committee member), Mitra, Tanushree (committee member).
Subjects/Keywords: Human behavior modeling; Agent based simulation; Markov decision processes
APA (6th Edition):
Singh, M. (2019). Human Behavior Modeling and Calibration in Epidemic Simulations. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/87050
Chicago Manual of Style (16th Edition):
Singh, Meghendra. “Human Behavior Modeling and Calibration in Epidemic Simulations.” 2019. Masters Thesis, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/87050.
MLA Handbook (7th Edition):
Singh, Meghendra. “Human Behavior Modeling and Calibration in Epidemic Simulations.” 2019. Web. 22 Jan 2021.
Vancouver:
Singh M. Human Behavior Modeling and Calibration in Epidemic Simulations. [Internet] [Masters thesis]. Virginia Tech; 2019. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/87050.
Council of Science Editors:
Singh M. Human Behavior Modeling and Calibration in Epidemic Simulations. [Masters Thesis]. Virginia Tech; 2019. Available from: http://hdl.handle.net/10919/87050

Virginia Tech
26.
Zhang, Bo.
Supporting Software Transactional Memory in Distributed Systems: Protocols for Cache-Coherence, Conflict Resolution and Replication.
Degree: PhD, Electrical and Computer Engineering, 2011, Virginia Tech
URL: http://hdl.handle.net/10919/29571
► Lock-based synchronization on multiprocessors is inherently non-scalable, non-composable, and error-prone. These problems are exacerbated in distributed systems due to an additional layer of complexity: multinode…
(more)
▼ Lock-based synchronization on multiprocessors is inherently non-scalable, non-composable, and error-prone. These problems are exacerbated in distributed systems due to an additional layer of complexity: multinode concurrency. Transactional memory (TM) is an emerging, alternative synchronization abstraction that promises to alleviate these difficulties. With the TM model, code that accesses shared memory objects is organized as transactions, which execute speculatively while logging changes. If transactional conflicts are detected, one of the conflicting transactions is aborted and re-executed, while the other is allowed to commit, yielding the illusion of atomicity. TM for multiprocessors has been proposed in software (STM), in hardware (HTM), and in a combination (HyTM).
This dissertation focuses on supporting the TM abstraction in distributed systems, i.e., distributed STM (or D-STM). We focus on three problem spaces: cache-coherence (CC), conflict resolution, and replication. We evaluate the performance of D-STM by measuring the competitive ratio of its makespan – i.e., the ratio of its makespan (the last completion time for a given set of transactions) to the makespan of an optimal off-line clairvoyant scheduler. We show that the performance of D-STM for metric-space networks is O(N^2) for N transactions requesting an object under the Greedy contention manager and an arbitrary CC protocol. To improve the performance, we propose a class of location-aware CC protocols, called LAC protocols. We show that the combination of the Greedy manager and a LAC protocol yields an O(N log N s) competitive ratio for s shared objects.
We then formalize two classes of CC protocols: distributed queuing cache-coherence (DQCC) protocols and distributed priority queuing cache-coherence (DPQCC) protocols, both of which can be implemented using distributed queuing protocols. We show that a DQCC protocol is O(N log D)-competitive and a DPQCC protocol is O(log D_delta)-competitive for N dynamically generated transactions requesting an object, where D_delta is the normalized diameter of the underlying distributed queuing protocol. Additionally, we propose a novel CC protocol, called Relay, which reduces the total number of aborts to O(N) for N conflicting transactions requesting an object, yielding a significant improvement over past CC protocols, which incur O(N^2) total aborts. We also analyze Relay's dynamic competitive ratio in terms of the communication cost (for dynamically generated transactions), and show that Relay's dynamic competitive ratio is O(log D_0), where D_0 is the normalized diameter of the underlying network spanning tree.
To reduce unnecessary aborts and increase concurrency for D-STM based on globally-consistent contention management policies, we propose the distributed dependency-aware (DDA) conflict resolution model, which adopts different conflict resolution strategies based on transaction types. In the DDA model, read-only transactions never abort, as a set of versions is kept for each object. Each…
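The competitive-ratio metric used throughout the abstract can be stated compactly as follows (a standard definition; the notation is illustrative rather than quoted from the dissertation).

```latex
% Competitive ratio of a D-STM system A on a transaction set T (standard definition;
% illustrative notation, not quoted from the dissertation):
\[
  \mathrm{CR}_{A}(T) \;=\;
  \frac{\mathrm{makespan}_{A}(T)}{\mathrm{makespan}_{\mathrm{OPT}}(T)},
\]
% where makespan_A(T) is the completion time of the last transaction in T under A,
% and OPT is an optimal off-line clairvoyant scheduler; the bounds quoted above,
% e.g. O(N^2) under the Greedy contention manager, are worst-case values of this ratio.
```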
Advisors/Committee Members: Ravindran, Binoy (committeechair), Broadwater, Robert P. (committee member), Plassmann, Paul E. (committee member), Vullikanti, Anil Kumar S. (committee member), Yang, Yaling (committee member).
Subjects/Keywords: Cache-Coherence; Distributed Queuing; Contention Management; Software Transactional Memory; Replication; Quorum System
APA (6th Edition):
Zhang, B. (2011). Supporting Software Transactional Memory in Distributed Systems: Protocols for Cache-Coherence, Conflict Resolution and Replication. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/29571
Chicago Manual of Style (16th Edition):
Zhang, Bo. “Supporting Software Transactional Memory in Distributed Systems: Protocols for Cache-Coherence, Conflict Resolution and Replication.” 2011. Doctoral Dissertation, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/29571.
MLA Handbook (7th Edition):
Zhang, Bo. “Supporting Software Transactional Memory in Distributed Systems: Protocols for Cache-Coherence, Conflict Resolution and Replication.” 2011. Web. 22 Jan 2021.
Vancouver:
Zhang B. Supporting Software Transactional Memory in Distributed Systems: Protocols for Cache-Coherence, Conflict Resolution and Replication. [Internet] [Doctoral dissertation]. Virginia Tech; 2011. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/29571.
Council of Science Editors:
Zhang B. Supporting Software Transactional Memory in Distributed Systems: Protocols for Cache-Coherence, Conflict Resolution and Replication. [Doctoral Dissertation]. Virginia Tech; 2011. Available from: http://hdl.handle.net/10919/29571

Virginia Tech
27.
Khan, Mohammed Saquib Akmal.
Efficient Spatio-Temporal Network Analytics in Epidemiological Studies using Distributed Databases.
Degree: MS, Computer Science and Applications, 2015, Virginia Tech
URL: http://hdl.handle.net/10919/51223
► Real-time Spatio-Temporal Analytics has become an integral part of Epidemiological studies. The size of the spatio-temporal data has been increasing tremendously over the years, gradually…
(more)
▼ Real-time spatio-temporal analytics has become an integral part of epidemiological studies. The size of the spatio-temporal data has been increasing tremendously over the years, gradually evolving into Big Data. The processing in such domains is highly data- and compute-intensive. High-performance computing resources are actively being used to handle such workloads over massive datasets. This confluence of high-performance computing and datasets with Big Data characteristics poses great challenges pertaining to data handling and processing. The resource management of supercomputers is in conflict with the data-intensive nature of spatio-temporal analytics. This is further exacerbated by the fact that data management is decoupled from the computing resources. Problems of this nature have provided great opportunities for the growth and development of tools and concepts centered around MapReduce-based solutions. However, we believe that advanced relational concepts can still be employed to provide an effective solution to these issues and challenges.
In this study, we explore distributed databases to efficiently handle spatio-temporal Big Data for epidemiological studies. We propose DiceX (Data Intensive Computational Epidemiology using supercomputers), which couples high-performance, Big Data, and relational computing by embedding distributed data storage and processing engines within the supercomputer. It is characterized by scalable strategies for data ingestion, a unified framework to set up and configure various processing engines, and the ability to pause, materialize, and restore images of a data session. In addition, we have successfully configured DiceX to support approximation algorithms from the MADlib Analytics Library [54], primarily the Count-Min Sketch or CM Sketch [33][34][35].
DiceX enables a new style of Big Data processing, which is centered around the use of clustered databases and exploits supercomputing resources. It can effectively exploit the cores, memory and compute nodes of supercomputers to scale processing of spatio-temporal queries on datasets of large volume. Thus, it provides a scalable and efficient tool for data management and processing of spatio-temporal data. Although DiceX has been designed for computational epidemiology, it can be easily extended to different data-intensive domains facing similar issues and challenges.
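For readers unfamiliar with the Count-Min Sketch approximation mentioned above, a minimal standalone Python illustration of the data structure follows; the thesis relies on MADlib's in-database implementation, so the class, parameters, and example keys here are purely illustrative.

```python
# Small standalone illustration of the Count-Min Sketch (CM Sketch) idea.
# The thesis uses MADlib's in-database implementation; this Python version,
# its parameters, and the example keys are illustrative assumptions.
import hashlib

class CountMinSketch:
    def __init__(self, width=1000, depth=5):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        # One hash per row, derived from a row-salted SHA-1 digest.
        for row in range(self.depth):
            digest = hashlib.sha1(f"{row}:{item}".encode()).hexdigest()
            yield row, int(digest, 16) % self.width

    def add(self, item, count=1):
        for row, col in self._buckets(item):
            self.table[row][col] += count

    def estimate(self, item):
        # Never underestimates; overestimation is bounded with high probability.
        return min(self.table[row][col] for row, col in self._buckets(item))

if __name__ == "__main__":
    cms = CountMinSketch()
    for county in ["51059", "51107", "51059", "51013", "51059"]:  # hypothetical FIPS codes
        cms.add(county)
    print(cms.estimate("51059"))   # approximately 3
```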
We thank our external collaborators and members of the Network Dynamics and Simulation Science Laboratory (NDSSL) for their suggestions and comments. This work has been partially supported by DTRA CNIMS Contract HDTRA1-11-D-0016-0001, DTRA Validation Grant HDTRA1-11-1-0016, NSF - Network Science and Engineering Grant CNS-1011769, NIH and NIGMS - Models of Infectious Disease Agent Study Grant 5U01GM070694-11.
Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government.
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Vullikanti, Anil Kumar S. (committee member), Prakash, Bodicherla Aditya (committee member), Gupta, Sandeep (committee member).
Subjects/Keywords: Data Analytics; Data Mining; Distributed Systems; Database Systems
APA (6th Edition):
Khan, M. S. A. (2015). Efficient Spatio-Temporal Network Analytics in Epidemiological Studies using Distributed Databases. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/51223
Chicago Manual of Style (16th Edition):
Khan, Mohammed Saquib Akmal. “Efficient Spatio-Temporal Network Analytics in Epidemiological Studies using Distributed Databases.” 2015. Masters Thesis, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/51223.
MLA Handbook (7th Edition):
Khan, Mohammed Saquib Akmal. “Efficient Spatio-Temporal Network Analytics in Epidemiological Studies using Distributed Databases.” 2015. Web. 22 Jan 2021.
Vancouver:
Khan MSA. Efficient Spatio-Temporal Network Analytics in Epidemiological Studies using Distributed Databases. [Internet] [Masters thesis]. Virginia Tech; 2015. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/51223.
Council of Science Editors:
Khan MSA. Efficient Spatio-Temporal Network Analytics in Epidemiological Studies using Distributed Databases. [Masters Thesis]. Virginia Tech; 2015. Available from: http://hdl.handle.net/10919/51223

Virginia Tech
28.
Banerjee, Sharmi.
Computational Approaches to Predict Effect of Epigenetic Modifications on Transcriptional Regulation of Gene Expression.
Degree: PhD, Electrical Engineering, 2019, Virginia Tech
URL: http://hdl.handle.net/10919/94393
► A cell is the basic unit of any living organism. Cells contain nucleus that contains DNA, self replicating material often called the blueprint of life.…
(more)
▼ A cell is the basic unit of any living organism. Cells contain a nucleus that holds DNA, a self-replicating material often called the blueprint of life. For the sustenance of life, cells must respond to changes in their environment. Gene expression regulation, a process in which specific regions of the DNA (genes) are copied into messenger RNA (mRNA) molecules and then translated into proteins, determines the fate of a cell. It is known that various environmental factors (such as diet, stress, and social interaction) and biological factors often indirectly affect gene expression regulation. In this dissertation, we use machine learning approaches to predict how certain biological factors indirectly interfere with gene expression by changing specific properties of DNA. We expect our findings will help in understanding the interplay of these factors in gene expression.
Advisors/Committee Members: Tokekar, Pratap (committeechair), Wu, Xiaowei (committeechair), Baumann, William T. (committee member), Kim, Inyoung (committee member), Vullikanti, Anil Kumar S. (committee member).
Subjects/Keywords: Epigenetic factors; gene expression; transcription factors; histone marks; DNA
APA (6th Edition):
Banerjee, S. (2019). Computational Approaches to Predict Effect of Epigenetic Modifications on Transcriptional Regulation of Gene Expression. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/94393
Chicago Manual of Style (16th Edition):
Banerjee, Sharmi. “Computational Approaches to Predict Effect of Epigenetic Modifications on Transcriptional Regulation of Gene Expression.” 2019. Doctoral Dissertation, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/94393.
MLA Handbook (7th Edition):
Banerjee, Sharmi. “Computational Approaches to Predict Effect of Epigenetic Modifications on Transcriptional Regulation of Gene Expression.” 2019. Web. 22 Jan 2021.
Vancouver:
Banerjee S. Computational Approaches to Predict Effect of Epigenetic Modifications on Transcriptional Regulation of Gene Expression. [Internet] [Doctoral dissertation]. Virginia Tech; 2019. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/94393.
Council of Science Editors:
Banerjee S. Computational Approaches to Predict Effect of Epigenetic Modifications on Transcriptional Regulation of Gene Expression. [Doctoral Dissertation]. Virginia Tech; 2019. Available from: http://hdl.handle.net/10919/94393

Virginia Tech
29.
Swaminathan, Anand.
An Algorithm for Influence Maximization and Target Set Selection for the Deterministic Linear Threshold Model.
Degree: MS, Computer Science and Applications, 2014, Virginia Tech
URL: http://hdl.handle.net/10919/49381
► The problem of influence maximization has been studied extensively with applications that include viral marketing, recommendations, and feed ranking. The optimization problem, first formulated by…
(more)
▼ The problem of influence maximization has been studied extensively with applications that include viral marketing, recommendations, and feed ranking. The optimization problem, first formulated by Kempe, Kleinberg and Tardos, is known to be NP-hard. Thus, several heuristics have been proposed to solve this problem. This thesis studies the problem of influence maximization under the deterministic linear threshold model and presents a novel heuristic for finding influential nodes in a graph with the goal of maximizing contagion spread that emanates from these influential nodes. Inputs to our algorithm include edge weights and vertex thresholds. The threshold difference greedy algorithm presented in this thesis takes into account both the edge weights as well as vertex thresholds in computing influence of a node. The threshold difference greedy algorithm is evaluated on 14 real-world networks. Results demonstrate that the new algorithm performs consistently better than the seven other heuristics that we evaluated in terms of final spread size. The threshold difference greedy algorithm has tuneable parameters which can make the algorithm run faster. As a part of the approach, the algorithm also computes the infected nodes in the graph. This eliminates the need for running simulations to determine the spread size from the influential nodes. We also study the target set selection problem with our algorithm. In this problem, the final spread size is specified and a seed (or influential) set is computed that will generate the required spread size.
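As a minimal sketch of the deterministic linear threshold model described above, the following Python computes the final spread of a given seed set by iterating activations to a fixed point; this is the spread computation only, not the thesis's threshold difference greedy heuristic, and the example graph, weights, and thresholds are assumptions.

```python
# Minimal sketch of spread computation under the deterministic linear threshold
# model: a node activates once the total weight on edges from active in-neighbors
# reaches its threshold.  Example graph, weights, and thresholds are illustrative.
def deterministic_lt_spread(in_edges, thresholds, seeds):
    """in_edges: {v: [(u, w_uv), ...]} weighted incoming edges of v.
    thresholds: {v: theta_v}.  Returns the set of nodes eventually activated."""
    active = set(seeds)
    changed = True
    while changed:                            # iterate to a fixed point
        changed = False
        for v, incoming in in_edges.items():
            if v in active:
                continue
            influence = sum(w for u, w in incoming if u in active)
            if influence >= thresholds[v]:    # v activates once influence meets theta_v
                active.add(v)
                changed = True
    return active

if __name__ == "__main__":
    in_edges = {
        "a": [],
        "b": [("a", 0.6)],
        "c": [("a", 0.3), ("b", 0.3)],
        "d": [("c", 0.4)],
    }
    thresholds = {"a": 0.5, "b": 0.5, "c": 0.5, "d": 0.5}
    print(sorted(deterministic_lt_spread(in_edges, thresholds, seeds={"a"})))
    # -> ['a', 'b', 'c']: d stays inactive because 0.4 < 0.5
```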
Advisors/Committee Members: Marathe, Madhav Vishnu (committeechair), Bisset, Keith R. (committee member), Vullikanti, Anil Kumar S. (committee member), Kuhlman, Christopher James (committee member).
Subjects/Keywords: Influence maximization; complex contagion; linear threshold
APA (6th Edition):
Swaminathan, A. (2014). An Algorithm for Influence Maximization and Target Set Selection for the Deterministic Linear Threshold Model. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/49381
Chicago Manual of Style (16th Edition):
Swaminathan, Anand. “An Algorithm for Influence Maximization and Target Set Selection for the Deterministic Linear Threshold Model.” 2014. Masters Thesis, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/49381.
MLA Handbook (7th Edition):
Swaminathan, Anand. “An Algorithm for Influence Maximization and Target Set Selection for the Deterministic Linear Threshold Model.” 2014. Web. 22 Jan 2021.
Vancouver:
Swaminathan A. An Algorithm for Influence Maximization and Target Set Selection for the Deterministic Linear Threshold Model. [Internet] [Masters thesis]. Virginia Tech; 2014. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/49381.
Council of Science Editors:
Swaminathan A. An Algorithm for Influence Maximization and Target Set Selection for the Deterministic Linear Threshold Model. [Masters Thesis]. Virginia Tech; 2014. Available from: http://hdl.handle.net/10919/49381
30.
Shao, Huijuan.
Temporal Mining Approaches for Smart Buildings Research.
Degree: PhD, Computer Science and Applications, 2017, Virginia Tech
URL: http://hdl.handle.net/10919/84349
► With the advent of modern sensor technologies, significant opportunities have opened up to help conserve energy in residential and commercial buildings. Moreover, the rapid urbanization…
(more)
▼ With the advent of modern sensor technologies, significant opportunities have opened up to help conserve energy in residential and commercial buildings. Moreover, the rapid urbanization we are witnessing requires optimized energy distribution. This dissertation focuses on two sub-problems in improving energy conservation: energy disaggregation and occupancy prediction. Energy disaggregation attempts to separate the energy usage of each circuit or each electric device in a building using only aggregate electricity usage information from the meter for the whole house. The second problem, occupancy prediction, can be accomplished using non-invasive indoor activity tracking to predict the locations of people inside a building. We cast both problems as temporal mining problems. We exploit motif mining with constraints to distinguish devices with multiple states, which helps tackle the energy disaggregation problem. Our results reveal that motif mining is adept at distinguishing devices with multiple power levels and at disentangling the combinatorial operation of devices. For the second problem, we propose time-gap constrained episode mining to detect activity patterns, followed by the use of a mixture of episode-generating HMM (EGH) models to predict home occupancy. Finally, we demonstrate that the mixture EGH model can also help predict the location of a person, addressing non-invasive indoor activity tracking.
Advisors/Committee Members: Ramakrishnan, Naren (committeechair), Lu, Chang Tien (committee member), Vullikanti, Anil Kumar S. (committee member), Marwah, Manish (committee member), Prakash, Bodicherla Aditya (committee member).
Subjects/Keywords: Data mining; Sustainability; Energy disaggregation; Occupancy prediction
APA (6th Edition):
Shao, H. (2017). Temporal Mining Approaches for Smart Buildings Research. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/84349
Chicago Manual of Style (16th Edition):
Shao, Huijuan. “Temporal Mining Approaches for Smart Buildings Research.” 2017. Doctoral Dissertation, Virginia Tech. Accessed January 22, 2021.
http://hdl.handle.net/10919/84349.
MLA Handbook (7th Edition):
Shao, Huijuan. “Temporal Mining Approaches for Smart Buildings Research.” 2017. Web. 22 Jan 2021.
Vancouver:
Shao H. Temporal Mining Approaches for Smart Buildings Research. [Internet] [Doctoral dissertation]. Virginia Tech; 2017. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10919/84349.
Council of Science Editors:
Shao H. Temporal Mining Approaches for Smart Buildings Research. [Doctoral Dissertation]. Virginia Tech; 2017. Available from: http://hdl.handle.net/10919/84349
◁ [1] [2] [3] ▶