You searched for subject:(distributed computing)
Showing records 1 – 30 of 919 total matches.
1.
Chung, Hyun-Chul.
Information Infrastructures in Distributed Environments: Algorithms for Mobile Networks and Resource Allocation.
Degree: 2013, Texas Digital Library
URL: http://hdl.handle.net/1969;
http://hdl.handle.net/2249.1/66628
A distributed system is a collection of computing entities that communicate with each other to solve some problem. Distributed systems impact almost every aspect of daily life (e.g., cellular networks and the Internet); however, it is hard to develop services on top of distributed systems due to the unreliable nature of computing entities and communication. As handheld devices with wireless communication capabilities become increasingly popular, the task of providing services becomes even more challenging since dynamics, such as mobility, may cause the network topology to change frequently. One way to ease this task is to develop collections of information infrastructures which can serve as building blocks to design more complicated services and can be analyzed independently.
The first part of the dissertation considers the dining philosophers problem (a generalization of the mutual exclusion problem) in static networks. A solution to the dining philosophers problem can be utilized when there is a need to prevent multiple nodes from accessing some shared resource simultaneously. We present two algorithms that solve the dining philosophers problem. The first algorithm considers an asynchronous message-passing model while the second one considers an asynchronous shared-memory model. Both algorithms are crash fault-tolerant in the sense that a node crash only affects its local neighborhood in the network. We utilize failure detectors (system services that provide some information about crash failures in the system) to achieve such crash fault-tolerance. In addition to crash fault-tolerance, the first algorithm provides fairness in accessing shared resources and the second algorithm tolerates transient failures (unexpected corruptions to the system state). Considering the message-passing model, we also provide a reduction such that given a crash fault-tolerant solution to our dining philosophers problem, we implement the failure detector that we have utilized to solve our dining philosophers problem. This reduction serves as the first step towards identifying the minimum information regarding crash failures that is required to solve the dining philosophers problem at hand.
In the second part of this dissertation, we present information infrastructures for mobile ad hoc networks. In particular, we present solutions to the following problems in mobile ad hoc environments: (1) maintaining neighbor knowledge, (2) neighbor detection, and (3) leader election. The solutions to (1) and (3) consider a system with perfectly synchronized clocks while the solution to (2) considers a system with bounded clock drift. Services such as neighbor detection and maintaining neighbor knowledge can serve as a building block for applications that require point-to-point communication. A solution to the leader election problem can be used whenever there is a need for a unique coordinator in the system to perform a special task.
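As a point of reference for the resource-allocation problem described above, the sketch below shows the classic dining philosophers setup in Python, using the textbook total-ordering rule to avoid deadlock on a single machine. It is only an illustration of the problem itself; the dissertation's message-passing and shared-memory algorithms, crash fault-tolerance, and failure detectors are not reproduced here.

```python
# Minimal single-machine illustration of the dining philosophers problem.
# Deadlock is avoided with the classic total-ordering trick; this is NOT the
# crash-fault-tolerant, failure-detector-based algorithm from the dissertation.
import threading
import time

NUM_PHILOSOPHERS = 5
forks = [threading.Lock() for _ in range(NUM_PHILOSOPHERS)]

def philosopher(i: int, rounds: int = 3) -> None:
    left, right = i, (i + 1) % NUM_PHILOSOPHERS
    # Always acquire the lower-numbered fork first so no cycle of waits can form.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                print(f"philosopher {i} eating")
                time.sleep(0.01)  # critical section: the shared resource is in use

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(NUM_PHILOSOPHERS)]
for t in threads: t.start()
for t in threads: t.join()
```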
Advisors/Committee Members: Welch, Jennifer L (advisor).
Subjects/Keywords: Distributed Computing
APA (6th Edition):
Chung, H. (2013). Information Infrastructures in Distributed Environments: Algorithms for Mobile Networks and Resource Allocation. (Thesis). Texas Digital Library. Retrieved from http://hdl.handle.net/1969; http://hdl.handle.net/2249.1/66628
Chicago Manual of Style (16th Edition):
Chung, Hyun-Chul. “Information Infrastructures in Distributed Environments: Algorithms for Mobile Networks and Resource Allocation.” 2013. Thesis, Texas Digital Library. Accessed January 21, 2021.
http://hdl.handle.net/1969; http://hdl.handle.net/2249.1/66628.
MLA Handbook (7th Edition):
Chung, Hyun-Chul. “Information Infrastructures in Distributed Environments: Algorithms for Mobile Networks and Resource Allocation.” 2013. Web. 21 Jan 2021.
Vancouver:
Chung H. Information Infrastructures in Distributed Environments: Algorithms for Mobile Networks and Resource Allocation. [Internet] [Thesis]. Texas Digital Library; 2013. [cited 2021 Jan 21].
Available from: http://hdl.handle.net/1969; http://hdl.handle.net/2249.1/66628.
Council of Science Editors:
Chung H. Information Infrastructures in Distributed Environments: Algorithms for Mobile Networks and Resource Allocation. [Thesis]. Texas Digital Library; 2013. Available from: http://hdl.handle.net/1969; http://hdl.handle.net/2249.1/66628

Penn State University
2.
Jain, Aman.
SplitServe: Efficiently splitting complex workloads over IaaS and FaaS.
Degree: 2019, Penn State University
URL: https://submit-etda.libraries.psu.edu/catalog/16826axj182
Serverless computing products such as AWS Lambdas and other "Cloud Functions" (CFs) can offer much lower startup latencies than Virtual Machine (VM) instances with lower minimum cost. This has made them an attractive candidate for autoscaling to handle unpredictable spikes in simple, mostly stateless workloads, both from a performance and a cost point of view. For complex (stateful, I/O-intensive) and latency-critical workloads, however, the efficacy of using CFs in combination with VMs has not been fully explored.
In this paper, we motivate a "split-service" application framework that can, for a given job (workload), simultaneously exploit both infrastructure-as-a-service (VM) and function-as-a-service (CF) products. Specifically, we design and implement a SplitServe-Spark embodiment of our proposal by modifying Apache Spark to use both Amazon VMs and Lambdas. Rather than letting performance degrade following the arrival of such jobs, we show that SplitServe-Spark is able to effectively use CFs to start servicing them immediately while new VMs are being launched, thus reducing the overall execution time. Further, when the new VMs do become available, SplitServe-Spark is able to move ongoing work from Lambdas to the new VMs, if that is deemed desirable from the cost or performance perspectives.
Our experimental evaluation of SplitServe-Spark using four different workloads (K-means clustering, PageRank, TPC-DS, and Pi) shows that SplitServe-Spark improves performance by up to 55% for workloads with small to modest amounts of shuffling and by up to 31% for workloads with large amounts of shuffling, when compared to VM-only autoscaling. Furthermore, SplitServe-Spark, along with novel segueing techniques, can save up to 21% in cost while still giving almost a 40% improvement in execution time.
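The split-service idea can be sketched in a few lines: run a task on an existing VM-backed worker pool when capacity is free, otherwise spill it to a cloud function while new VMs warm up. The Lambda function name and task format below are made up for illustration, and this is not SplitServe's actual scheduler.

```python
# Hypothetical sketch of dispatching work to either VM workers or an AWS Lambda
# "cloud function" during a load spike. Not SplitServe's implementation.
import json
from concurrent.futures import ThreadPoolExecutor

import boto3  # assumes AWS credentials/region are configured

VM_POOL = ThreadPoolExecutor(max_workers=4)   # stands in for executors on IaaS VMs
LAMBDA_NAME = "splitserve-task-runner"        # hypothetical Lambda function

def run_on_vm(task):
    # Placeholder for the real work a VM executor would do.
    return sum(task["values"])

def run_on_lambda(task):
    # Synchronously invoke a cloud function for the same task.
    client = boto3.client("lambda")
    resp = client.invoke(FunctionName=LAMBDA_NAME,
                         InvocationType="RequestResponse",
                         Payload=json.dumps(task))
    return json.loads(resp["Payload"].read())

def dispatch(task, vm_slots_free: bool):
    # Prefer VMs when capacity exists; use Lambdas to absorb a spike immediately.
    if vm_slots_free:
        return VM_POOL.submit(run_on_vm, task).result()
    return run_on_lambda(task)

# Example: dispatch({"values": [1, 2, 3]}, vm_slots_free=True)
```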
Advisors/Committee Members: Bhuvan Urgaonkar, Thesis Advisor/Co-Advisor, George Kesidis, Committee Member.
Subjects/Keywords: Public Cloud Computing; Distributed Computing
APA (6th Edition):
Jain, A. (2019). SplitServe: Efficiently splitting complex workloads over IaaS and FaaS. (Thesis). Penn State University. Retrieved from https://submit-etda.libraries.psu.edu/catalog/16826axj182
Chicago Manual of Style (16th Edition):
Jain, Aman. “SplitServe: Efficiently splitting complex workloads over IaaS and FaaS.” 2019. Thesis, Penn State University. Accessed January 21, 2021.
https://submit-etda.libraries.psu.edu/catalog/16826axj182.
MLA Handbook (7th Edition):
Jain, Aman. “SplitServe: Efficiently splitting complex workloads over IaaS and FaaS.” 2019. Web. 21 Jan 2021.
Vancouver:
Jain A. SplitServe: Efficiently splitting complex workloads over IaaS and FaaS. [Internet] [Thesis]. Penn State University; 2019. [cited 2021 Jan 21].
Available from: https://submit-etda.libraries.psu.edu/catalog/16826axj182.
Council of Science Editors:
Jain A. SplitServe: Efficiently splitting complex workloads over IaaS and FaaS. [Thesis]. Penn State University; 2019. Available from: https://submit-etda.libraries.psu.edu/catalog/16826axj182

University of Minnesota
3.
Vuggumudi, Viswanadh Kumar Reddy.
A MPI-based Distributed Computation for Supporting Optimization of Urban Designs with QUIC EnvSim.
Degree: MS, Computer Science, 2015, University of Minnesota
URL: http://hdl.handle.net/11299/174731
In the present day of urbanization, the rise in urban infrastructure is causing an increase in air temperatures and pollution concentrations. This leads to an increase in the energy required to cool buildings and more focused efforts to mitigate pollution. An effective way to mitigate these problems is by carefully designing cityscapes, i.e., by placing buildings and vegetation optimally and choosing energy-efficient building materials. Researchers have been building computational models to understand the effects of urban infrastructure on microclimate. Simulating these models is a computationally expensive task. QUIC EnvSim (QES) is a dynamic, scalable and high-performance framework that has provided a platform for building and simulating these models. QUIC EnvSim uses Graphics Processing Units (GPUs) to run each individual simulation faster than previous simulation codes. Though each individual simulation takes a short time, it is often necessary to perform large numbers of simulations, and it can take a long time to complete them. This thesis introduces MPI QUIC, a scalable and extendable framework for running these simulations across a cluster of machines, effectively reducing the time required to run all simulations. Various tests have shown that the framework is capable of running large numbers of simulations in relatively little time. A test running 65,536 simulations was performed. The estimated time for running the test on a single computer is approximately 11.37 days, with each simulation taking approximately 15 seconds to complete. The framework was able to finish running all the simulations in 19 hours, 0 minutes and 25 seconds, showing a tremendous speed-up of 92.5%. Thus urban planners can use this framework along with QUIC EnvSim to understand the effects of urban forms on microclimate and make informed design decisions relatively quickly for building environmentally friendly urban landscapes. Besides providing a distributed computational environment, the other goal of the MPI QUIC project is to provide a user-friendly interface for specifying optimization problems. The current work provides the groundwork for its successors to provide a programmable interface for end users to specify optimization problems. The framework is also designed so that future implementers can incorporate optimization algorithms that can optimize on multiple fitness functions.
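Running many independent simulations across a cluster is commonly structured as an MPI master-worker task farm. The sketch below (mpi4py) is a generic version of that pattern, not the MPI QUIC code itself; run_simulation is a stand-in for a QUIC EnvSim run.

```python
# Generic MPI master-worker task farm (mpi4py) for spreading many independent
# simulations over a cluster. Illustration only, not the MPI QUIC implementation.
from mpi4py import MPI

TAG_TASK, TAG_RESULT, TAG_STOP = 1, 2, 3

def run_simulation(params):
    # Stand-in for one simulation run (hypothetical workload).
    return params * params

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:  # master: hand out tasks as workers become free
    tasks = list(range(100))
    results, active = [], 0
    for worker in range(1, size):            # prime every worker with one task
        comm.send(tasks.pop(), dest=worker, tag=TAG_TASK)
        active += 1
    status = MPI.Status()
    while active:
        res = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_RESULT, status=status)
        results.append(res)
        src = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=src, tag=TAG_TASK)
        else:
            comm.send(None, dest=src, tag=TAG_STOP)
            active -= 1
    print(f"collected {len(results)} results")
else:          # worker: keep processing tasks until told to stop
    status = MPI.Status()
    while True:
        params = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send(run_simulation(params), dest=0, tag=TAG_RESULT)
```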
Subjects/Keywords: Distributed computing; MPI
APA (6th Edition):
Vuggumudi, V. K. R. (2015). A MPI-based Distributed Computation for Supporting Optimization of Urban Designs with QUIC EnvSim. (Masters Thesis). University of Minnesota. Retrieved from http://hdl.handle.net/11299/174731
Chicago Manual of Style (16th Edition):
Vuggumudi, Viswanadh Kumar Reddy. “A MPI-based Distributed Computation for Supporting Optimization of Urban Designs with QUIC EnvSim.” 2015. Masters Thesis, University of Minnesota. Accessed January 21, 2021.
http://hdl.handle.net/11299/174731.
MLA Handbook (7th Edition):
Vuggumudi, Viswanadh Kumar Reddy. “A MPI-based Distributed Computation for Supporting Optimization of Urban Designs with QUIC EnvSim.” 2015. Web. 21 Jan 2021.
Vancouver:
Vuggumudi VKR. A MPI-based Distributed Computation for Supporting Optimization of Urban Designs with QUIC EnvSim. [Internet] [Masters thesis]. University of Minnesota; 2015. [cited 2021 Jan 21].
Available from: http://hdl.handle.net/11299/174731.
Council of Science Editors:
Vuggumudi VKR. A MPI-based Distributed Computation for Supporting Optimization of Urban Designs with QUIC EnvSim. [Masters Thesis]. University of Minnesota; 2015. Available from: http://hdl.handle.net/11299/174731

Delft University of Technology
4.
Leliveld, Dorus (author).
A Resiliency-First Approach to Distributed DAG Computations.
Degree: 2017, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:566b7b54-b40b-45fa-85c2-debaaa8098d4
A framework is introduced for computations with transformations on immutable data. Inspiration is taken from Apache Spark; however, the model of computation is generalized from an emphasis on narrow and wide dependencies to an arbitrary set of transformations that form a directed acyclic graph (DAG). A distributed scheduling algorithm is developed with resiliency mechanisms that can account for stopping failures. Furthermore, some properties of the system are derived. Finally, future work is discussed, showing there is fertile ground for further research and development to extend this work.
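A toy version of the resiliency idea: because every node of the DAG is a pure transformation of immutable inputs, a result lost to a stopping failure can simply be recomputed from its parents. This is only a sketch of the general principle, not the thesis's distributed scheduling algorithm.

```python
# Toy DAG of pure transformations over immutable data, with recomputation on loss.
# Illustrates the general resiliency principle only, not the thesis's scheduler.
from graphlib import TopologicalSorter

# Each node: (function, list of parent nodes). Leaves take no inputs.
DAG = {
    "a": (lambda: list(range(10)), []),
    "b": (lambda a: [x * 2 for x in a], ["a"]),
    "c": (lambda a: [x + 1 for x in a], ["a"]),
    "d": (lambda b, c: sum(b) + sum(c), ["b", "c"]),
}

cache = {}  # materialized results; entries may be lost when a worker stops

def compute(node):
    # Recursively (re)build any missing ancestor, then this node.
    if node in cache:
        return cache[node]
    fn, parents = DAG[node]
    cache[node] = fn(*(compute(p) for p in parents))
    return cache[node]

# Initial run in topological order.
order = list(TopologicalSorter({n: set(ps) for n, (_, ps) in DAG.items()}).static_order())
for n in order:
    compute(n)

# Simulate a stopping failure that loses an intermediate result, then recover.
del cache["b"]
print("recomputed d =", compute("d"))
```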
Computer Engineering
Advisors/Committee Members: Hofstee, Peter (mentor), Delft University of Technology (degree granting institution).
Subjects/Keywords: Distributed Computing; Resiliency; Distributed Scheduling; Load Balancing
APA (6th Edition):
Leliveld, D. (2017). A Resiliency-First Approach to Distributed DAG Computations. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:566b7b54-b40b-45fa-85c2-debaaa8098d4
Chicago Manual of Style (16th Edition):
Leliveld, Dorus (author). “A Resiliency-First Approach to Distributed DAG Computations.” 2017. Masters Thesis, Delft University of Technology. Accessed January 21, 2021.
http://resolver.tudelft.nl/uuid:566b7b54-b40b-45fa-85c2-debaaa8098d4.
MLA Handbook (7th Edition):
Leliveld, Dorus (author). “A Resiliency-First Approach to Distributed DAG Computations.” 2017. Web. 21 Jan 2021.
Vancouver:
Leliveld D. A Resiliency-First Approach to Distributed DAG Computations. [Internet] [Masters thesis]. Delft University of Technology; 2017. [cited 2021 Jan 21].
Available from: http://resolver.tudelft.nl/uuid:566b7b54-b40b-45fa-85c2-debaaa8098d4.
Council of Science Editors:
Leliveld D. A Resiliency-First Approach to Distributed DAG Computations. [Masters Thesis]. Delft University of Technology; 2017. Available from: http://resolver.tudelft.nl/uuid:566b7b54-b40b-45fa-85c2-debaaa8098d4

Purdue University
5.
Kambatla, Karthik Shashank.
Methods to Improve Applicability and Efficiency of Distributed Data-Centric Compute Frameworks.
Degree: PhD, Computer Science, 2016, Purdue University
URL: https://docs.lib.purdue.edu/open_access_dissertations/1379
The success of modern applications depends on the insights they collect from their data repositories. Data repositories for such applications currently exceed exabytes and are rapidly increasing in size, as they collect data from varied sources: web applications, mobile phones, sensors and other connected devices.
Distributed storage and data-centric compute frameworks have been invented to store and analyze these large datasets. This dissertation focuses on extending the applicability and improving the efficiency of distributed data-centric compute frameworks.
Advisors/Committee Members: Ananth Y Grama, Dongyan Xu, Sonia Fahmy, Mathias Payer, Aniket Kate.
Subjects/Keywords: big data; distributed computing; distributed systems
APA (6th Edition):
Kambatla, K. S. (2016). Methods to Improve Applicability and Efficiency of Distributed Data-Centric Compute Frameworks. (Doctoral Dissertation). Purdue University. Retrieved from https://docs.lib.purdue.edu/open_access_dissertations/1379
Chicago Manual of Style (16th Edition):
Kambatla, Karthik Shashank. “Methods to Improve Applicability and Efficiency of Distributed Data-Centric Compute Frameworks.” 2016. Doctoral Dissertation, Purdue University. Accessed January 21, 2021.
https://docs.lib.purdue.edu/open_access_dissertations/1379.
MLA Handbook (7th Edition):
Kambatla, Karthik Shashank. “Methods to Improve Applicability and Efficiency of Distributed Data-Centric Compute Frameworks.” 2016. Web. 21 Jan 2021.
Vancouver:
Kambatla KS. Methods to Improve Applicability and Efficiency of Distributed Data-Centric Compute Frameworks. [Internet] [Doctoral dissertation]. Purdue University; 2016. [cited 2021 Jan 21].
Available from: https://docs.lib.purdue.edu/open_access_dissertations/1379.
Council of Science Editors:
Kambatla KS. Methods to Improve Applicability and Efficiency of Distributed Data-Centric Compute Frameworks. [Doctoral Dissertation]. Purdue University; 2016. Available from: https://docs.lib.purdue.edu/open_access_dissertations/1379

NSYSU
6.
Ke, Hung-i.
A multi-agent-based distributed computing environment for bioinformatics applications.
Degree: Master, Information Management, 2009, NSYSU
URL: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727109-114814
Bioinformatics computing consumes huge computing resources. Given the difficulty of improving algorithms and the high cost of mainframes, many scholars choose distributed computing as an approach for reducing computing time. When using distributed computing for bioinformatics, finding a proper task allocation strategy among different computing nodes to keep the load balanced is an important issue. By adopting a multi-agent system as a tool, system developers can design task allocation strategies from an intuitive viewpoint and keep the load balanced among computing nodes.
The purpose of our research work is to use a multi-agent system as an underlying tool to develop a distributed computing environment and assist scholars in solving bioinformatics computing problems. In comparison with public computing projects such as BOINC, our research work focuses on utilizing computing nodes deployed inside an organization and connected by a local area network.
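The load-balancing concern above can be illustrated with a very small allocation strategy: send each incoming job to the currently least-loaded node. This is a generic sketch, not the multi-agent system built in the thesis; job names and cost estimates are invented.

```python
# Minimal "least-loaded node" allocation strategy for independent bioinformatics jobs.
# A generic illustration of load balancing, not the thesis's multi-agent system.
import heapq

def allocate(task_costs, nodes):
    """Greedily assign each task (by estimated cost) to the least-loaded node."""
    heap = [(0.0, node) for node in nodes]      # (current load, node id)
    heapq.heapify(heap)
    assignment = {node: [] for node in nodes}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, node = heapq.heappop(heap)        # node with the smallest load so far
        assignment[node].append(task)
        heapq.heappush(heap, (load + cost, node))
    return assignment

jobs = {"blast_run_%d" % i: float(c) for i, c in enumerate([5, 3, 8, 2, 7, 4])}
print(allocate(jobs, ["node1", "node2", "node3"]))
```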
Advisors/Committee Members: Chia-Mei Chen (chair), Wei-Po Lee (committee member), Bing-Chiang Jeng (chair).
Subjects/Keywords: distributed computing; agent; bioinformatics
APA (6th Edition):
Ke, H. (2009). A multi-agent-based distributed computing environment for bioinformatics applications. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727109-114814
Chicago Manual of Style (16th Edition):
Ke, Hung-i. “A multi-agent-based distributed computing environment for bioinformatics applications.” 2009. Thesis, NSYSU. Accessed January 21, 2021.
http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727109-114814.
MLA Handbook (7th Edition):
Ke, Hung-i. “A multi-agent-based distributed computing environment for bioinformatics applications.” 2009. Web. 21 Jan 2021.
Vancouver:
Ke H. A multi-agent-based distributed computing environment for bioinformatics applications. [Internet] [Thesis]. NSYSU; 2009. [cited 2021 Jan 21].
Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727109-114814.
Council of Science Editors:
Ke H. A multi-agent-based distributed computing environment for bioinformatics applications. [Thesis]. NSYSU; 2009. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727109-114814

University of Adelaide
7.
Kelly, Peter M.
Applying functional programming theory to the design of workflow engines.
Degree: 2011, University of Adelaide
URL: http://hdl.handle.net/2440/65938
Workflow languages are a form of high-level programming language designed for coordinating tasks implemented by different pieces of software, often executed across multiple computers using technologies such as web services. Advantages of workflow languages include automatic parallelisation, built-in support for accessing services, and simple programming models that abstract away many of the complexities associated with distributed and parallel programming. In this thesis, we focus on data-oriented workflow languages, in which all computation is free of side effects. Despite their advantages, existing workflow languages sacrifice support for internal computation and data manipulation, in an attempt to provide programming models that are simple to understand and contain implicit parallelism. These limitations inconvenience users, who are forced to define additional components in separate scripting languages whenever they need to implement programming logic that cannot be expressed in the workflow language itself.
In this thesis, we propose the use of functional programming as a model for data-oriented workflow languages. Functional programming languages are both highly expressive and amenable to automatic parallelisation. Our approach combines the coordination facilities of workflow languages with the computation facilities of functional programming languages, allowing both aspects of a workflow to be expressed in the one language. We have designed and implemented a small functional language called ELC, which extends lambda calculus with a minimal set of features necessary for practical implementation of workflows. ELC can either be used directly, or as a compilation target for other workflow languages. Using this approach, we developed a compiler for XQuery, extended with support for web services. XQuery's native support for XML processing makes it well-suited for manipulating the XML data produced and consumed by web services. Both languages make it easy to develop complex workflows involving arbitrary computation and data manipulation.
Our workflow engine, NReduce, uses parallel graph reduction to execute workflows. It supports both orchestration, where a central node coordinates all service invocation, and choreography, where coordination, scheduling, and data transfer are carried out in a decentralised manner across multiple nodes. The details of orchestration and choreography are abstracted away from the programmer by the workflow engine. In both cases, parallel invocation of services is managed in a completely automatic manner, without any explicit direction from the programmer.
Our study includes an in-depth analysis of performance issues of relevance to our approach. This includes a discussion of performance problems encountered during our implementation work, and an explanation of the solutions we have devised to these. Such issues are likely to be of relevance to others developing workflow engines based on a similar model. We also benchmark our system using a range of workflows, demonstrating high levels…
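The core idea, side-effect-free tasks whose independent service calls can be invoked in parallel without explicit direction from the programmer, can be sketched in a few lines. This is not ELC or NReduce (which use parallel graph reduction over an extended lambda calculus); it is just an illustration of the programming model using futures, with invented stand-ins for web-service calls.

```python
# Tiny illustration of a data-oriented, side-effect-free workflow in which
# independent "service calls" run in parallel automatically. Not ELC/NReduce.
from concurrent.futures import ThreadPoolExecutor

def fetch_gene(gid):          # stand-in for a remote web-service call
    return {"id": gid, "len": len(gid) * 100}

def annotate(record):         # pure transformation of the fetched record
    return {**record, "annotated": True}

def merge(records):           # depends on all records, so it runs last
    return sorted(records, key=lambda r: r["id"])

def workflow(gene_ids):
    # fetch+annotate of different ids are independent, so they parallelise.
    with ThreadPoolExecutor() as pool:
        annotated = list(pool.map(lambda g: annotate(fetch_gene(g)), gene_ids))
    return merge(annotated)

print(workflow(["brca1", "tp53", "egfr"]))
```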
Advisors/Committee Members: Coddington, Paul David (advisor), School of Computer Science (school).
Subjects/Keywords: functional programming; workflow; distributed computing
APA (6th Edition):
Kelly, P. M. (2011). Applying functional programming theory to the design of workflow engines. (Thesis). University of Adelaide. Retrieved from http://hdl.handle.net/2440/65938
Chicago Manual of Style (16th Edition):
Kelly, Peter M. “Applying functional programming theory to the design of workflow engines.” 2011. Thesis, University of Adelaide. Accessed January 21, 2021.
http://hdl.handle.net/2440/65938.
MLA Handbook (7th Edition):
Kelly, Peter M. “Applying functional programming theory to the design of workflow engines.” 2011. Web. 21 Jan 2021.
Vancouver:
Kelly PM. Applying functional programming theory to the design of workflow engines. [Internet] [Thesis]. University of Adelaide; 2011. [cited 2021 Jan 21].
Available from: http://hdl.handle.net/2440/65938.
Council of Science Editors:
Kelly PM. Applying functional programming theory to the design of workflow engines. [Thesis]. University of Adelaide; 2011. Available from: http://hdl.handle.net/2440/65938

Northeastern University
8.
Li, Xiangyu.
Exploiting large-scale data analytics platforms with accelerator hardware.
Degree: PhD, Department of Electrical and Computer Engineering, 2018, Northeastern University
URL: http://hdl.handle.net/2047/D20294268
The volume of data being generated today across multiple application domains, including scientific exploration, web search, e-commerce and medical research, has continued to grow unbounded. The value of leveraging machine learning to analyze big data has led to the growth in popularity of high-level distributed computing frameworks such as Apache Hadoop and Spark. These frameworks have significantly improved the programmability of distributed systems to accelerate big data analysis, whose workload is typically beyond the processing and storage capabilities of a single machine.
GPUs have been shown to provide an effective path to accelerate machine learning tasks. These devices offer high memory bandwidth and thousands of parallel cores which can deliver up to an order of magnitude better performance for machine learning applications as compared to multi-core CPUs.
While both distributed systems and GPUs have been shown to independently provide benefits when processing machine learning tasks on big data, developing an integrated framework that can exploit the parallelism provided by GPUs, while maintaining an easy-to-use programming interface, has not been aggressively explored.
In this thesis, we explore the seamless integration of GPUs with Hadoop and Spark to achieve performance and scalability, while preserving their flexible programming interfaces. We propose techniques that expose GPU details for fine-tuned kernels in a Java/Scala-based distributed computing environment, reduce JVM overhead, and increase on/off-heap memory efficiency. We use a set of representative machine learning data analytics applications to test our approach and achieve promising performance improvements compared to Hadoop/Spark's multi-core CPU implementations.
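A minimal sketch of the per-partition offload pattern that accelerator integration with Spark relies on: each partition is batched into one contiguous array so it could be handed to a single kernel call. NumPy stands in for the GPU kernel here, and this is PySpark rather than the Java/Scala environment of the dissertation; none of the dissertation's JVM or memory-management techniques are shown.

```python
# Per-partition offload pattern: batch each Spark partition so it could be handed
# to one accelerator kernel call. NumPy is a CPU stand-in for the GPU kernel;
# this is not the framework developed in the dissertation.
import numpy as np
from pyspark import SparkContext

def accelerated_partition(rows):
    batch = np.fromiter(rows, dtype=np.float64)   # one contiguous batch per partition
    # A real implementation would copy `batch` to device memory and launch a kernel;
    # the expression below is a CPU stand-in with the same batched structure.
    result = np.sqrt(batch) * 2.0
    return iter(result.tolist())

if __name__ == "__main__":
    sc = SparkContext("local[2]", "gpu-offload-sketch")
    rdd = sc.parallelize(range(1_000_000), numSlices=8)
    print(rdd.mapPartitions(accelerated_partition).take(5))
    sc.stop()
```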
Subjects/Keywords: distributed computing; GPU; machine learning
APA (6th Edition):
Li, X. (2018). Exploiting large-scale data analytics platforms with accelerator hardware. (Doctoral Dissertation). Northeastern University. Retrieved from http://hdl.handle.net/2047/D20294268
Chicago Manual of Style (16th Edition):
Li, Xiangyu. “Exploiting large-scale data analytics platforms with accelerator hardware.” 2018. Doctoral Dissertation, Northeastern University. Accessed January 21, 2021.
http://hdl.handle.net/2047/D20294268.
MLA Handbook (7th Edition):
Li, Xiangyu. “Exploiting large-scale data analytics platforms with accelerator hardware.” 2018. Web. 21 Jan 2021.
Vancouver:
Li X. Exploiting large-scale data analytics platforms with accelerator hardware. [Internet] [Doctoral dissertation]. Northeastern University; 2018. [cited 2021 Jan 21].
Available from: http://hdl.handle.net/2047/D20294268.
Council of Science Editors:
Li X. Exploiting large-scale data analytics platforms with accelerator hardware. [Doctoral Dissertation]. Northeastern University; 2018. Available from: http://hdl.handle.net/2047/D20294268

University of Manitoba
9.
Brajczuk, Dale A.
Mining frequent sequences in one database scan using distributed computers.
Degree: Computer Science, 2011, University of Manitoba
URL: http://hdl.handle.net/1993/4814
Existing frequent-sequence mining algorithms perform multiple scans of a database, or a structure that captures the database. In this M.Sc. thesis, I propose a frequent-sequence mining algorithm that mines each database row as it reads it, so that it can potentially complete mining in the time it takes to read the database once. I achieve this by having my algorithm enumerate all sub-sequences from each row as it reads it.
Since sub-sequence enumeration is a time-consuming process, I create a method to distribute the work over multiple computers, processors, and thread units, while balancing the load between all resources, and limiting the amount of communication so that my algorithm scales well in regards to the number of computers used. Experimental results show that my algorithm is effective, and can potentially complete the mining process in near the time it takes to perform one scan of the input database.
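The single-scan idea can be seen in miniature below: each row is read once and its subsequences are enumerated and counted immediately. Subsequence length is capped here because full enumeration is exponential, and the thesis's distribution across computers, processors, and threads is omitted.

```python
# Single-scan frequent-sequence counting in miniature: each database row is read
# once and its subsequences are enumerated and counted on the spot. Length-capped
# for tractability; the thesis's distribution machinery is not shown.
from collections import Counter
from itertools import combinations

def subsequences(row, max_len=3):
    for k in range(1, min(max_len, len(row)) + 1):
        # combinations preserves order, i.e. yields (not necessarily contiguous) subsequences
        yield from combinations(row, k)

def mine(database, min_support=2, max_len=3):
    counts = Counter()
    for row in database:                                # one pass over the database
        counts.update(set(subsequences(row, max_len)))  # count each subsequence once per row
    return {seq: c for seq, c in counts.items() if c >= min_support}

db = [("a", "b", "c"), ("a", "c", "d"), ("b", "c", "d")]
print(mine(db))
```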
Advisors/Committee Members: Leung, Carson K. (Computer Science) (supervisor), Irani, Pourang (Computer Science) Rajapakse, Athula (Electrical & Computer Engineering) (examiningcommittee).
Subjects/Keywords: data mining; databases; distributed computing
APA (6th Edition):
Brajczuk, D. A. (2011). Mining frequent sequences in one database scan using distributed computers. (Masters Thesis). University of Manitoba. Retrieved from http://hdl.handle.net/1993/4814
Chicago Manual of Style (16th Edition):
Brajczuk, Dale A. “Mining frequent sequences in one database scan using distributed computers.” 2011. Masters Thesis, University of Manitoba. Accessed January 21, 2021.
http://hdl.handle.net/1993/4814.
MLA Handbook (7th Edition):
Brajczuk, Dale A. “Mining frequent sequences in one database scan using distributed computers.” 2011. Web. 21 Jan 2021.
Vancouver:
Brajczuk DA. Mining frequent sequences in one database scan using distributed computers. [Internet] [Masters thesis]. University of Manitoba; 2011. [cited 2021 Jan 21].
Available from: http://hdl.handle.net/1993/4814.
Council of Science Editors:
Brajczuk DA. Mining frequent sequences in one database scan using distributed computers. [Masters Thesis]. University of Manitoba; 2011. Available from: http://hdl.handle.net/1993/4814

Victoria University of Wellington
10.
John, Koshy.
The Social Cloud for Public eResearch.
Degree: 2012, Victoria University of Wellington
URL: http://hdl.handle.net/10063/2187
Scientific researchers faced with extremely large computations or the requirement of storing vast quantities of data have come to rely on distributed computational models like grid and cloud computing. However, distributed computation is typically complex and expensive. The Social Cloud for Public eResearch aims to provide researchers with a platform to exploit social networks to reach out to users who would otherwise be unlikely to donate computational time for scientific and other research-oriented projects. This thesis explores the motivations of users to contribute computational time and examines the various ways these motivations can be catered to through established social networks. We specifically look at integrating Facebook and BOINC, and discuss the architecture of the functional system and the novel social engineering algorithms that power it.
Advisors/Committee Members: Bubendorfer, Kris.
Subjects/Keywords: Social networks; Distributed computing
APA (6th Edition):
John, K. (2012). The Social Cloud for Public eResearch. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/2187
Chicago Manual of Style (16th Edition):
John, Koshy. “The Social Cloud for Public eResearch.” 2012. Masters Thesis, Victoria University of Wellington. Accessed January 21, 2021.
http://hdl.handle.net/10063/2187.
MLA Handbook (7th Edition):
John, Koshy. “The Social Cloud for Public eResearch.” 2012. Web. 21 Jan 2021.
Vancouver:
John K. The Social Cloud for Public eResearch. [Internet] [Masters thesis]. Victoria University of Wellington; 2012. [cited 2021 Jan 21].
Available from: http://hdl.handle.net/10063/2187.
Council of Science Editors:
John K. The Social Cloud for Public eResearch. [Masters Thesis]. Victoria University of Wellington; 2012. Available from: http://hdl.handle.net/10063/2187

University of Notre Dame
11.
Badi' Abdul-Wahid.
A Software Pipeline for Ensemble Molecular
Dynamics.
Degree: Computer Science and Engineering, 2015, University of Notre Dame
URL: https://curate.nd.edu/show/5x21td98g9t
Proteins are the “machines of life” and understanding their motions is crucial to understanding the nature of various diseases, for instance Alzheimer’s and Huntington’s. Molecular Dynamics simulations are useful since they provide an atom-level resolution of a protein’s motion. Of particular interest is measuring the rates at which different molecular conformations interchange. A significant bottleneck in the use of molecular dynamics is the difference in timescales. As a result, simulations require billions of steps to access slow motions such as protein folding. This work describes the development of a software pipeline to address this issue. The result is a set of software packages aimed at facilitating specific points in the study of proteins: exploratory simulations, conformational sampling, and rate calculations. Due to the large number of simulations that need to be run, each step supports the use of distributed systems and supports long-running applications and fault-tolerance. The problem addressed is that current approaches suffer from systematic bias, and my contribution is an implementation of a method that does not suffer this bias, as well as a distributed computing pipeline.
Advisors/Committee Members: Jesus Izaguirre, Committee Chair, Collin McMillan, Committee Member, Christopher Sweet, Committee Member, Douglas Thain, Committee Member.
Subjects/Keywords: molecular dynamics; distributed computing
APA (6th Edition):
Abdul-Wahid, B. (2015). A Software Pipeline for Ensemble Molecular
Dynamics. (Thesis). University of Notre Dame. Retrieved from https://curate.nd.edu/show/5x21td98g9t
Chicago Manual of Style (16th Edition):
Abdul-Wahid, Badi'. “A Software Pipeline for Ensemble Molecular
Dynamics.” 2015. Thesis, University of Notre Dame. Accessed January 21, 2021.
https://curate.nd.edu/show/5x21td98g9t.
MLA Handbook (7th Edition):
Abdul-Wahid, Badi'. “A Software Pipeline for Ensemble Molecular
Dynamics.” 2015. Web. 21 Jan 2021.
Vancouver:
Abdul-Wahid B. A Software Pipeline for Ensemble Molecular
Dynamics. [Internet] [Thesis]. University of Notre Dame; 2015. [cited 2021 Jan 21].
Available from: https://curate.nd.edu/show/5x21td98g9t.
Council of Science Editors:
Abdul-Wahid B. A Software Pipeline for Ensemble Molecular
Dynamics. [Thesis]. University of Notre Dame; 2015. Available from: https://curate.nd.edu/show/5x21td98g9t

University of Minnesota
12.
Padhye, Vinit A.
Transaction and data consistency models for cloud applications.
Degree: PhD, Computer Science, 2014, University of Minnesota
URL: http://hdl.handle.net/11299/163018
The emergence of cloud computing and large-scale Internet services has given rise to new classes of data management systems, commonly referred to as NoSQL systems. The NoSQL systems provide high scalability and availability; however, they provide only limited forms of transaction support and weak consistency models. There are many applications that require more useful transaction and data consistency models than those currently provided by the NoSQL systems. In this thesis, we address the problem of providing scalable transaction support and appropriate consistency models for cluster-based as well as geo-replicated NoSQL systems. The models we develop in this thesis are founded upon the snapshot isolation (SI) model, which has been recognized as attractive for scalability. In supporting transactions on cluster-based NoSQL systems, we introduce a notion of decoupled transaction management in which transaction management functions are decoupled from the storage system and integrated with the application layer. We present two system architectures based on this concept. In the first system architecture, all transaction management functions are executed in a fully decentralized manner by the application processes. The second architecture is based on a hybrid approach in which the conflict detection functions are performed by a dedicated service. Because the SI model can lead to non-serializable transaction executions, we investigate two approaches for ensuring serializability. We perform a comparative evaluation of the two architectures and approaches for guaranteeing serializability and demonstrate their scalability. For transaction management in geo-replicated systems, we propose an SI-based transaction model, referred to as causal snapshot isolation (CSI), which provides causal consistency using asynchronous replication. The causal consistency model provides more useful consistency guarantees than the eventual consistency model. We build upon the CSI model to provide an efficient transaction model for partially replicated databases, addressing the unique challenges raised due to partial replication in supporting snapshot isolation and causal consistency. Through experimental evaluations, we demonstrate the scalability and performance of our mechanisms.
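A minimal single-process sketch of the snapshot isolation check that the thesis builds on: a transaction reads from a snapshot taken at begin time and may commit only if no key in its write set was committed by another transaction after that snapshot (first-committer-wins). The decoupled/decentralized transaction management and the CSI replication protocol are not modeled here.

```python
# Minimal in-memory sketch of snapshot isolation with a first-committer-wins
# write-write conflict check. Single process only; not the thesis's decoupled
# transaction management or causal snapshot isolation (CSI) protocols.
import itertools

clock = itertools.count(1)
versions = {}          # key -> list of (commit_ts, value), newest last

class Txn:
    def __init__(self):
        self.snapshot = next(clock)    # begin timestamp
        self.writes = {}

    def read(self, key):
        if key in self.writes:
            return self.writes[key]
        for ts, val in reversed(versions.get(key, [])):
            if ts <= self.snapshot:    # newest version visible in the snapshot
                return val
        return None

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # Abort if any written key has a version newer than our snapshot.
        for key in self.writes:
            committed = versions.get(key, [])
            if committed and committed[-1][0] > self.snapshot:
                return False
        commit_ts = next(clock)
        for key, value in self.writes.items():
            versions.setdefault(key, []).append((commit_ts, value))
        return True

t1, t2 = Txn(), Txn()
t1.write("x", 1); t2.write("x", 2)
print(t1.commit(), t2.commit())   # True False: the second writer aborts
```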
Subjects/Keywords: Cloud Computing; Database; Distributed Systems
APA (6th Edition):
Padhye, V. A. (2014). Transaction and data consistency models for cloud applications. (Doctoral Dissertation). University of Minnesota. Retrieved from http://hdl.handle.net/11299/163018
Chicago Manual of Style (16th Edition):
Padhye, Vinit A. “Transaction and data consistency models for cloud applications.” 2014. Doctoral Dissertation, University of Minnesota. Accessed January 21, 2021.
http://hdl.handle.net/11299/163018.
MLA Handbook (7th Edition):
Padhye, Vinit A. “Transaction and data consistency models for cloud applications.” 2014. Web. 21 Jan 2021.
Vancouver:
Padhye VA. Transaction and data consistency models for cloud applications. [Internet] [Doctoral dissertation]. University of Minnesota; 2014. [cited 2021 Jan 21].
Available from: http://hdl.handle.net/11299/163018.
Council of Science Editors:
Padhye VA. Transaction and data consistency models for cloud applications. [Doctoral Dissertation]. University of Minnesota; 2014. Available from: http://hdl.handle.net/11299/163018

Louisiana State University
13.
Pokhrel, Ayam.
Distributed Iterative Graph Processing Using NoSQL with Data Locality.
Degree: MS, Other Computer Engineering, 2018, Louisiana State University
URL: https://digitalcommons.lsu.edu/gradschool_theses/4710
A tremendous amount of data is generated every day from a wide range of sources such as social networks, sensors, and application logs. Among them, graph data is one type that represents valuable relationships between various entities. Analytics of large graphs has become an essential part of business processes and scientific studies because it leads to deep and meaningful insights into the related domain based on the connections between various entities. However, the optimal processing of large-scale iterative graph computations is very challenging due to issues like fault tolerance, high memory requirements, parallelization, and scalability. Most contemporary systems focus either on keeping the entire graph data in memory and minimizing disk access or on processing the graph data completely on a single node with a centralized disk system. GraphMap is one of the state-of-the-art scalable and efficient out-of-core, disk-based iterative graph processing systems that focus on using secondary storage and optimizing I/O access. In this thesis, we investigate two new extensions to the existing out-of-core NoSQL-based distributed iterative graph processing system: 1) intra-worker data locality and 2) mincut-based partitioning. We design an additional suite of data locality that moves the computation towards the data rather than the other way around. A significant improvement in performance, up to 39%, is demonstrated by this locality implementation. Similarly, we use the mincut-based graph partitioning technique to distribute the graph data uniformly across the workers for parallelization so that the inter-worker communication volume is minimized. Through extensive experiments, we also show that the mincut-based graph partitioning technique can lead to improper parallelization due to sub-optimal load-balancing.
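The payoff of mincut-style partitioning in this setting is that fewer edges cross workers, so there is less inter-worker traffic per iteration. The sketch below only measures that communication volume (edge cut) for a given vertex-to-worker assignment and compares naive hash partitioning against a handmade community-preserving assignment; it is not GraphMap's partitioner or locality implementation, and the toy graph is invented.

```python
# Measure inter-worker communication volume (edge cut) for a vertex-to-worker
# assignment, the quantity a mincut-based partitioner tries to minimize.
# Illustration only; not GraphMap's partitioning or data-locality code.
def edge_cut(edges, assignment):
    """Number of edges whose endpoints live on different workers."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

edges = [(0, 1), (1, 2), (2, 0),        # one tight community
         (3, 4), (4, 5), (5, 3),        # another tight community
         (2, 3)]                        # single bridge between them

hash_part   = {v: v % 2 for v in range(6)}           # naive hash partitioning
mincut_part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}   # communities kept together

print("hash partitioning cut:", edge_cut(edges, hash_part))
print("mincut-style cut:     ", edge_cut(edges, mincut_part))
```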
Subjects/Keywords: graph data processing; distributed computing
APA (6th Edition):
Pokhrel, A. (2018). Distributed Iterative Graph Processing Using NoSQL with Data Locality. (Masters Thesis). Louisiana State University. Retrieved from https://digitalcommons.lsu.edu/gradschool_theses/4710
Chicago Manual of Style (16th Edition):
Pokhrel, Ayam. “Distributed Iterative Graph Processing Using NoSQL with Data Locality.” 2018. Masters Thesis, Louisiana State University. Accessed January 21, 2021.
https://digitalcommons.lsu.edu/gradschool_theses/4710.
MLA Handbook (7th Edition):
Pokhrel, Ayam. “Distributed Iterative Graph Processing Using NoSQL with Data Locality.” 2018. Web. 21 Jan 2021.
Vancouver:
Pokhrel A. Distributed Iterative Graph Processing Using NoSQL with Data Locality. [Internet] [Masters thesis]. Louisiana State University; 2018. [cited 2021 Jan 21].
Available from: https://digitalcommons.lsu.edu/gradschool_theses/4710.
Council of Science Editors:
Pokhrel A. Distributed Iterative Graph Processing Using NoSQL with Data Locality. [Masters Thesis]. Louisiana State University; 2018. Available from: https://digitalcommons.lsu.edu/gradschool_theses/4710

Anna University
14.
Jayasudha A R.
Efficient scheduling algorithms for Mapping resources
using differential Evolution for grid computing
Environment;.
Degree: Efficient scheduling algorithms for Mapping resources
using differential Evolution for grid computing
Environment, 2015, Anna University
URL: http://shodhganga.inflibnet.ac.in/handle/10603/35495
The last few decades have witnessed a remarkable advancement in the field of Distributed Computing. Grid Computing is a form of distributed computing: it is a wide-scale infrastructure that supports sharing of resources and plays a greater role in coordinating problem solving. It is a collection of resources that involves computational capacities and storage resources owned and administered by many different organizations. Problems persisting in many fields that could not be solved using limited resources can be solved with resources in grid computing. It provides dependable, consistent, pervasive and inexpensive access to high-end computational capabilities. It aggregates the resources of multiple computers in a network for a single problem at the same time, for a scientific or technical one that requires a large number of computer processing cycles or access to large amounts of data. Recently, many methods proposed for grid scheduling have gained attention for achieving high throughput. However, it results in high computation cost and consumes more time. In order to overcome the above-stated issues, a grid scheduling algorithm is required that could reduce these overheads and enhance the throughput.
reference p103-114.
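To make the scheduling idea concrete, here is a generic differential evolution loop that searches for a task-to-resource mapping minimizing makespan. The population size, F, CR, the example task lengths and resource speeds, and the decoding of real vectors to resource indices are standard textbook choices; this is not the specific algorithm proposed in the thesis.

```python
# Generic differential evolution (DE/rand/1/bin) searching for a task-to-resource
# mapping that minimizes makespan on heterogeneous grid resources. Textbook DE
# with invented example data; not the thesis's scheduling algorithm.
import random

TASK_LENGTHS = [40, 12, 55, 23, 37, 18, 60, 9]     # instructions per task (example data)
RESOURCE_SPEEDS = [10, 5, 20]                      # instructions/sec per resource

def decode(vector):
    # Map each real gene in [0, 1) to a resource index.
    return [int(g * len(RESOURCE_SPEEDS)) % len(RESOURCE_SPEEDS) for g in vector]

def makespan(vector):
    finish = [0.0] * len(RESOURCE_SPEEDS)
    for task, res in zip(TASK_LENGTHS, decode(vector)):
        finish[res] += task / RESOURCE_SPEEDS[res]
    return max(finish)

def differential_evolution(pop_size=20, F=0.5, CR=0.9, generations=200):
    dim = len(TASK_LENGTHS)
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)
            trial = [a[j] + F * (b[j] - c[j]) if (random.random() < CR or j == j_rand)
                     else pop[i][j] for j in range(dim)]
            trial = [min(max(g, 0.0), 0.999) for g in trial]     # keep genes in range
            if makespan(trial) <= makespan(pop[i]):              # greedy selection
                pop[i] = trial
    return min(pop, key=makespan)

best = differential_evolution()
print("best mapping:", decode(best), "makespan:", round(makespan(best), 2))
```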
Advisors/Committee Members: Purusothaman T.
Subjects/Keywords: Distributed Computing Grid Computing; Science and humanities
APA (6th Edition):
R, J. A. (2015). Efficient scheduling algorithms for Mapping resources
using differential Evolution for grid computing
Environment;. (Thesis). Anna University. Retrieved from http://shodhganga.inflibnet.ac.in/handle/10603/35495
Chicago Manual of Style (16th Edition):
R, Jayasudha A. “Efficient scheduling algorithms for Mapping resources
using differential Evolution for grid computing
Environment;.” 2015. Thesis, Anna University. Accessed January 21, 2021.
http://shodhganga.inflibnet.ac.in/handle/10603/35495.
MLA Handbook (7th Edition):
R, Jayasudha A. “Efficient scheduling algorithms for Mapping resources
using differential Evolution for grid computing
Environment;.” 2015. Web. 21 Jan 2021.
Vancouver:
R JA. Efficient scheduling algorithms for Mapping resources
using differential Evolution for grid computing
Environment;. [Internet] [Thesis]. Anna University; 2015. [cited 2021 Jan 21].
Available from: http://shodhganga.inflibnet.ac.in/handle/10603/35495.
Council of Science Editors:
R JA. Efficient scheduling algorithms for Mapping resources
using differential Evolution for grid computing
Environment;. [Thesis]. Anna University; 2015. Available from: http://shodhganga.inflibnet.ac.in/handle/10603/35495

University of Notre Dame
15.
Chao Zheng.
The Challenges of Scaling Up High-Throughput Workflow with
Container Technology.
Degree: Computer Science and Engineering, 2019, University of Notre Dame
URL: https://curate.nd.edu/show/g158bg28c5m
High-throughput computing (HTC) is about using a large amount of computing resources over a long time to accomplish many independent and parallel computational tasks. HTC workloads are often described in the form of workflows and run on distributed systems through workflow systems. However, as most workflow systems are not responsible for managing the task execution environment, HTC workflows are regularly limited to dedicated HTC facilities that have the required settings. Lately, container runtimes have been widely deployed across public clouds because of their ability to deliver execution environments with lower overheads than virtual machines. This trend provides users of HTC workflows an opportunity to use unlimited computing power on the cloud. However, migrating complex workflow systems to a container environment is cumbersome.
To containerize HTC workflows and scale them up on the cloud, I synthesize my experiences of using container technologies and develop a methodology that contains seven design factors: i) Isolation Granularity: the granularity of isolation should be determined by the characteristics of the target workloads; ii) Container Management: container runtimes must be adapted to the distributed environment, and the management of containers is best done by the under-layer distributed system; iii) Image Management: a cooperative mechanism can help to speed up and improve the efficiency of image distribution in a distributed environment; iv) Garbage Collection: timely garbage collection is necessary given the massive amount of intermediate data generated by the HTC workflow; v) Network Connection: excessive network connections should be avoided given the many small transmissions; vi) Resource Management: customized resource management mechanisms that fully consider the characteristics of the target workflow are required; vii) Cross-layer Cooperation: implementation of advanced features requires cooperation between the upper-layer workflow system and the under-layer cluster manager.
In addition to HTC workflows, I validate the above factors through my work on standardizing the resource provisioning process for extreme-scale online workloads, and observe that they are equally applicable to the HTC workflow as well as the extreme-scale online workload.
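One of the simplest ways a workflow engine can deliver a task's execution environment (the isolation-granularity and garbage-collection factors above) is to wrap each task in its own container invocation and clean up when it exits. The sketch below shells out to the Docker CLI per task; the image name and commands are hypothetical, and, as the abstract argues, production HTC systems delegate container management to the underlying cluster manager rather than doing it this naively.

```python
# Run each workflow task inside its own container and clean up afterwards.
# A deliberately simple sketch of per-task isolation via the Docker CLI;
# the image name and commands are hypothetical, and real HTC systems hand
# container management to the under-layer cluster manager instead.
import subprocess

IMAGE = "python:3.11-slim"   # task environment delivered as an image

def run_task(task_id: int, cmd: list[str]) -> int:
    # --rm removes the container when the task exits (timely garbage collection).
    result = subprocess.run(
        ["docker", "run", "--rm", "--name", f"htc-task-{task_id}", IMAGE, *cmd],
        capture_output=True, text=True)
    print(f"task {task_id} exited with {result.returncode}")
    return result.returncode

if __name__ == "__main__":
    tasks = [["python", "-c", f"print('task {i} done')"] for i in range(3)]
    for i, cmd in enumerate(tasks):
        run_task(i, cmd)
```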
Advisors/Committee Members: Douglas L. Thain, Research Director, Christian Poellabauer, Committee Member, Dong Wang, Committee Member, Lukas Rupprecht, Committee Member.
Subjects/Keywords: High-Throughput Computing; Cloud Computing; Distributed System
APA (6th Edition):
Zheng, C. (2019). The Challenges of Scaling Up High-Throughput Workflow with
Container Technology. (Thesis). University of Notre Dame. Retrieved from https://curate.nd.edu/show/g158bg28c5m
Chicago Manual of Style (16th Edition):
Zheng, Chao. “The Challenges of Scaling Up High-Throughput Workflow with
Container Technology.” 2019. Thesis, University of Notre Dame. Accessed January 21, 2021.
https://curate.nd.edu/show/g158bg28c5m.
MLA Handbook (7th Edition):
Zheng, Chao. “The Challenges of Scaling Up High-Throughput Workflow with
Container Technology.” 2019. Web. 21 Jan 2021.
Vancouver:
Zheng C. The Challenges of Scaling Up High-Throughput Workflow with
Container Technology. [Internet] [Thesis]. University of Notre Dame; 2019. [cited 2021 Jan 21].
Available from: https://curate.nd.edu/show/g158bg28c5m.
Council of Science Editors:
Zheng C. The Challenges of Scaling Up High-Throughput Workflow with
Container Technology. [Thesis]. University of Notre Dame; 2019. Available from: https://curate.nd.edu/show/g158bg28c5m
16.
Heintz, Benjamin.
Optimizing Timeliness, Accuracy, and Cost in Geo-Distributed Data-Intensive Computing Systems.
Degree: PhD, Computer Science, 2016, University of Minnesota
URL: http://hdl.handle.net/11299/185171
► Big Data touches every aspect of our lives, from the way we spend our free time to the way we make scientific discoveries. Netflix streamed…
(more)
▼ Big Data touches every aspect of our lives, from the way we spend our free time to the way we make scientific discoveries. Netflix streamed more than 42 billion hours of video in 2015, and in the process recorded massive volumes of data to inform video recommendations and plan investments in new content. The CERN Large Hadron Collider produces enough data to fill more than one billion DVDs every week, and this data has led to the discovery of the Higgs boson particle. Such large scale computing is challenging because no one machine is capable of ingesting, storing, or processing all of the data. Instead, applications require distributed systems comprising many machines working in concert. Adding to the challenge, many data streams originate from geographically distributed sources. Scientific sensors such as LIGO span multiple sites and generate data too massive to process at any one location. The machines that analyze these data are also geo-distributed; for example Netflix and Facebook users span the globe, and so do the machines used to analyze their behavior. Many applications need to process geo-distributed data on geo-distributed systems with low latency. A key challenge in achieving this requirement is determining where to carry out the computation. For applications that process unbounded data streams, two performance metrics are critical: WAN traffic and staleness (i.e., delay in receiving results). To optimize these metrics, a system must determine when to communicate results from distributed resources to a central data warehouse. As an additional challenge, constrained WAN bandwidth often renders exact computation infeasible. Fortunately, many applications can tolerate inaccuracy, albeit with diverse preferences. To support diverse applications, systems must determine what partial results to communicate in order to achieve the desired staleness-error tradeoff. This thesis presents answers to these three questions – where to compute, when to communicate, and what partial results to communicate – in two contexts: batch computing, where the complete input data set is available prior to computation; and stream computing, where input data are continuously generated. We also explore the challenges facing emerging programming models and execution engines that unify stream and batch computing.
Subjects/Keywords: Data-intensive Computing; Distributed Systems; Geo-distributed; Stream computing
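A minimal sketch of the "when to communicate" decision described in the record above: an edge aggregator accumulates partial counts and pushes them to the central warehouse only when either a staleness bound or an unreported-volume bound is exceeded. The thresholds and the send_to_warehouse stub are illustrative assumptions, not the mechanisms evaluated in the dissertation.

import time
from collections import Counter

STALENESS_BOUND_S = 5.0   # assumed maximum tolerated delay before a push
ERROR_BOUND = 100         # assumed maximum unreported count before a push

class EdgeAggregator:
    def __init__(self, send_to_warehouse):
        self.partial = Counter()
        self.unreported = 0
        self.last_flush = time.time()
        self.send = send_to_warehouse

    def observe(self, key, count=1):
        """Aggregate locally; communicate only when staleness or error grows too large."""
        self.partial[key] += count
        self.unreported += count
        too_stale = time.time() - self.last_flush > STALENESS_BOUND_S
        too_inaccurate = self.unreported > ERROR_BOUND
        if too_stale or too_inaccurate:
            self.flush()

    def flush(self):
        self.send(dict(self.partial))     # the WAN transfer of partial results
        self.partial.clear()
        self.unreported = 0
        self.last_flush = time.time()

agg = EdgeAggregator(send_to_warehouse=print)
for _ in range(250):
    agg.observe("video_view")             # flushes once the unreported bound is crossed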

University of Tasmania
17.
Atkinson, AK.
Tupleware: a distributed tuple space for the development and execution of array-based applications in a cluster computing environment.
Degree: 2010, University of Tasmania
URL: https://eprints.utas.edu.au/9996/1/Alistair_Atkinson_PhD_Thesis.pdf
► This thesis describes Tupleware, an implementation of a distributed tuple space which acts as a scalable and efficient cluster middleware for computationally intensive numerical and…
(more)
▼ This thesis describes Tupleware, an implementation of a distributed tuple space which acts as a scalable and efficient cluster middleware for computationally intensive numerical and scientific applications. Tupleware is based on the Linda coordination language (Gelernter 1985), and incorporates additional techniques such as peer-to-peer communications and exploitation of data locality in order to address problems such as scalability and performance, which are commonly encountered by traditional centralised tuple space implementations.
Tupleware is implemented in such a way that, while processing is taking place, all communication between cluster nodes is decentralised in a peer-to-peer fashion. Communication events are initiated by a node requesting a tuple which is located on a remote node, and in order to make tuple retrieval as efficient as possible, a tuple search algorithm is used to minimise the number of communication instances required to retrieve a remote tuple. This algorithm is based on the locality of a remote tuple and the success of previous remote tuple requests. As Tupleware is targeted at numerical applications which generally involve the partitioning and processing of 1-D or 2-D arrays, a remote tuple can generally be determined to be located on one of a small number of nodes which are processing neighbouring partitions of the array.
Furthermore, unlike some other distributed tuple space implementations, Tupleware does not burden the programmer with any additional complexity due to this distribution. At the application level, the Tupleware middleware behaves exactly like a centralised tuple space, and provides much greater flexibility with regards to where components of a system are executed.
The design and implementation of Tupleware is described, and placed in the context of other distributed tuple space implementations, along with the specific requirements of the applications that the system caters for. Finally, Tupleware is evaluated using several numerical and/or scientific applications, which show it to provide a sufficient level of scalability for a broad range of tasks.
The main contribution of this work is the identification of techniques which enable a tuple space to be efficiently and transparently distributed across the nodes in a cluster. Central to this is the use of an algorithm for tuple retrieval which minimises the number of communication instances which occur during system execution. Distribution transparency is ensured by the provision of a simple interface to the underlying system, so that the distributed tuple space appears to the programmer as a single unified resource.
It is hoped that this research in some way furthers the adoption of the tuple space programming model for distributed computing, by enhancing its ability to
provide improved performance, scalability, flexibility and simplicity for a range of applications not traditionally suited to tuple space based systems.
Subjects/Keywords: Distributed computing; parallel computing; concurrency; high-performance computing; tuple space.
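A minimal sketch of the locality-aware retrieval idea described in the record above: a node asks the neighbour that last satisfied a matching request before falling back to the remaining nodes, which keeps the number of communication instances small. The neighbour list, the remote_lookup callable, and the toy in-memory cluster are illustrative assumptions; Tupleware's actual search protocol is more involved.

def rd(template, neighbours, remote_lookup, last_hit):
    """Non-destructive read: try the most recently successful neighbour first."""
    ordered = list(neighbours)
    hinted = last_hit.get(template[0])            # hint keyed on the tuple's tag field
    if hinted in ordered:
        ordered.remove(hinted)
        ordered.insert(0, hinted)                 # locality hint goes to the front
    for node in ordered:
        result = remote_lookup(node, template)    # one communication instance
        if result is not None:
            last_hit[template[0]] = node          # remember where this data lives
            return result
    return None

# Toy "cluster": node n1 holds the boundary row of a partitioned array, n2 holds nothing.
store = {"n1": [("row", 42, [1.0, 2.0])], "n2": []}
lookup = lambda node, tpl: next((t for t in store[node] if t[0] == tpl[0]), None)
hits = {}
print(rd(("row", None, None), ["n2", "n1"], lookup, hits))   # found on n1 after asking n2
print(rd(("row", None, None), ["n2", "n1"], lookup, hits))   # asks n1 first this time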

Brunel University
18.
Hansen, Jarle.
An investigation of smartphone applications : exploring usability aspects related to wireless personal area networks, context-awareness, and remote information access.
Degree: PhD, 2012, Brunel University
URL: http://bura.brunel.ac.uk/handle/2438/6518
;
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.557768
► In this thesis we look into usability in the context of smartphone applications. We selected three research areas to investigate, namely Wireless Personal Area Networks,…
(more)
▼ In this thesis we look into usability in the context of smartphone applications. We selected three research areas to investigate, namely Wireless Personal Area Networks, Context-awareness, and Remote Information Access. These areas are investigated through a series of experiments, which focus on important aspects of usability within software applications. Additionally, we mainly use smartphone devices in the experiments. In experiment 1, Multi-Platform Bluetooth Remote Control, we investigated Wireless Personal Area Networks. Specifically, we implemented a system consisting of two clients, which were created for Java ME and Windows Mobile, and integrated with a server application installed on a Bluetooth-enabled laptop. For experiments 2 and 3, Context-aware Meeting Room and PainDroid: an Android Application for Pain Management, we looked closely at the research area of Context-awareness. The Context-aware Meeting Room was created to automatically send meeting participants useful meeting notes during presentations. In experiment 3, we investigated the use of on-device sensors for the Android platform, providing an additional input mechanism for a pain management application, where the accelerometer and magnetometer were used. Finally, the last research area we investigated was Remote Information Access, where we conducted experiment 4, Customised Android Home Screen. We created a system that integrated both a cloud-based server application and a mobile client running on the Android platform. We used the cloud-computing platform to provide context management features, such as the ability to store the user configuration that was automatically pushed to the mobile devices.
Subjects/Keywords: 004.167; Mobile computing; Pervasive computing; Smartphone applications; Usability; Distributed computing

Delft University of Technology
19.
van Eyk, Erwin (author).
The Design, Productization, and Evaluation of a Serverless Workflow-Management System.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:af1407a8-6141-446c-828b-f3a4f5bf5786
► The need for accessible and cost-effective IT resources has led to the near-universal adoption of cloud computing. Within cloud computing, serverless computing has emerged as…
(more)
▼ The need for accessible and cost-effective IT resources has led to the near-universal adoption of cloud computing. Within cloud computing, serverless computing has emerged as a model that further abstracts away the operational complexity of heterogeneous cloud resources. Central to this form of computing is Function-as-a-Service (FaaS), a cloud model that enables users to express applications as functions, further decoupling the application logic from the hardware and other operational concerns. Although FaaS has seen rapid adoption for simple use cases, there are several issues that impede its use for more complex use cases. A key issue is the lack of systems that facilitate the reuse of existing functions to create more complex, composed functions. Current approaches for serverless function composition are either proprietary, resource inefficient, unreliable, or do not scale. To address this issue, we propose an approach to orchestrate composed functions reliably and efficiently using workflows. As a prototype, we design and implement Fission Workflows: an open-source serverless workflow system which leverages the characteristics of serverless functions to improve the (re)usability, performance, and reliability of function compositions. We evaluate our prototype using both synthetic and real-world experiments, which show that the system is comparable with or better than state-of-the-art workflow systems, while costing significantly less. Based on the experimental evaluation and the industry interest in the Fission Workflows product, we believe that serverless workflow orchestration will enable the use of serverless applications for more complex use cases.
Computer Science
Advisors/Committee Members: Iosup, Alexandru (mentor), Delft University of Technology (degree granting institution).
Subjects/Keywords: Serverless Computing; function-as-a-service; Distributed Computing; Cloud Computing; Workflow
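To make the composition problem concrete, here is a small, hypothetical Python sketch of orchestrating two serverless functions as a sequential workflow with retries handled by the orchestrator rather than the caller. The function names and the invoke stub are assumptions and do not reflect Fission Workflows' real API or specification format.

import time

def invoke(function, payload):
    """Stand-in for a FaaS invocation (e.g. an HTTP call to the platform)."""
    registry = {
        "resize-image": lambda p: {**p, "resized": True},
        "store-result": lambda p: {**p, "stored": True},
    }
    return registry[function](payload)

def run_workflow(steps, payload, retries=2):
    """Run functions in sequence; the orchestrator, not the caller, handles retries."""
    for step in steps:
        for attempt in range(retries + 1):
            try:
                payload = invoke(step, payload)
                break
            except Exception:
                if attempt == retries:
                    raise
                time.sleep(0.1 * (attempt + 1))   # simple backoff before retrying
    return payload

print(run_workflow(["resize-image", "store-result"], {"image": "cat.png"}))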
20.
Fesl, Jan.
Virtual Distributed Computing Systems and Their Applications
.
Degree: 2017, Czech University of Technology
URL: http://hdl.handle.net/10467/73643
► This dissertation thesis deals with the architecture, benchmarking, optimization and implementation of virtual distributed computing systems. Large distributed systems representing the performance background of…
(more)
▼ This dissertation thesis deals with the architecture, benchmarking, optimization and implementation of virtual distributed computing systems. Large distributed systems representing the performance background of all modern cloud computing architectures have become a very hot topic at present. This dissertation thesis offers an introduction into modern technologies that have a rapid impact on system performance. One of them is virtualization technology, whose real efficiency was a standalone part of the research. Large distributed systems consume a huge amount of electric power, therefore their optimization was also discussed. New ideas originating from the research were incorporated into the proposal of a new distributed system called Centrální mozek univerzity (CMU). This system is able to manage and automatically optimize large virtualization infrastructures. It is also accessible to the end-users. This system is currently used in teaching many subjects and became a prototype of a new-generation educational system.
In particular, the main contributions of the dissertation thesis are as follows:
1. Methodology, design and implementation of a new software benchmark utility able to measure the virtualization efficiency of a distributed architecture.
2. A new approach to the migration of entire distributed systems between various data centres.
3. A new distributed algorithm for virtual machine consolidation that rapidly reduces energy consumption.
4. Design, description and implementation of the distributed virtualization system CMU.
Advisors/Committee Members: Janeček, Jan (advisor).
Subjects/Keywords: virtualization; distributed computing systems; cloud computing; live migration; green computing
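As a generic illustration of the consolidation idea behind contribution 3 (not the thesis's actual algorithm), the Python sketch below packs virtual machines onto as few hosts as possible with a first-fit-decreasing heuristic so that the freed hosts can be suspended to save energy. The host capacity and VM demands are invented numbers.

def consolidate(vm_demands, host_capacity):
    """First-fit-decreasing bin packing: returns a list of hosts, each a list of VM demands."""
    hosts = []                      # each entry: [remaining_capacity, [vms...]]
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if host[0] >= demand:   # first host with enough spare capacity
                host[0] -= demand
                host[1].append(demand)
                break
        else:
            hosts.append([host_capacity - demand, [demand]])  # open a new host
    return [vms for _, vms in hosts]

# Ten VMs that previously ran on ten hosts fit on three; seven hosts can be suspended.
placement = consolidate([4, 8, 2, 6, 1, 3, 5, 2, 7, 2], host_capacity=16)
print(len(placement), placement)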

Virginia Tech
21.
Datla, Dinesh.
Wireless Distributed Computing in Cloud Computing Networks.
Degree: PhD, Electrical Engineering, 2013, Virginia Tech
URL: http://hdl.handle.net/10919/51729
► The explosion in growth of smart wireless devices has increased the ubiquitous presence of computational resources and location-based data. This new reality of numerous wireless…
(more)
▼ The explosion in growth of smart wireless devices has increased the ubiquitous presence of computational resources and location-based data. This new reality of numerous wireless devices capable of collecting, sharing, and processing information makes possible an avenue for new enhanced applications. Multiple radio nodes with diverse functionalities can form a wireless cloud computing network (WCCN) and collaborate on executing complex applications using wireless distributed computing (WDC). Such a dynamically composed virtual cloud environment can offer services and resources hosted by individual nodes for consumption by user applications. This dissertation proposes an architectural framework for WCCNs and presents the different phases of its development, namely, development of a mathematical system model of WCCNs, simulation analysis of the performance benefits offered by WCCNs, design of decision-making mechanisms in the architecture, and development of a prototype to validate the proposed architecture.
The dissertation presents a system model that captures power consumption, energy consumption, and latency experienced by computational and communication activities in a typical WCCN. In addition, it derives a stochastic model of the response time experienced by a user application when executed in a WCCN. Decision-making and resource allocation play a critical role in the proposed architecture. Two adaptive algorithms are presented, namely, a workload allocation algorithm and a task allocation - scheduling algorithm. The proposed algorithms are analyzed for power efficiency, energy efficiency, and improvement in the execution time of user applications that are achieved by workload distribution. Experimental results gathered from a software-defined radio network prototype of the proposed architecture validate the theoretical analysis and show that it is possible to achieve 80 % improvement in execution time with the help of just three nodes in the network.
Advisors/Committee Members: Bose, Tamal (committeechair), Reed, Jeffrey H. (committeechair), Park, Jung-Min (committee member), Marathe, Madhav Vishnu (committee member), MacKenzie, Allen B. (committee member).
Subjects/Keywords: distributed computing; wireless cloud computing; mobile cloud computing; wireless networks
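A minimal sketch of the kind of workload allocation the record above describes: split a divisible job across wireless nodes in proportion to their processing rates, then estimate completion time and energy from per-node power figures. The rates and power numbers are invented for illustration and are not from the dissertation.

def allocate(total_work, rates):
    """Split divisible work proportionally to node processing rates (ops/s)."""
    total_rate = sum(rates.values())
    return {node: total_work * r / total_rate for node, r in rates.items()}

def cost(shares, rates, power_w):
    """Completion time is set by the slowest node; energy sums active power * time."""
    times = {n: shares[n] / rates[n] for n in shares}
    makespan = max(times.values())
    energy = sum(power_w[n] * times[n] for n in shares)
    return makespan, energy

rates = {"phone": 2e6, "laptop": 8e6, "sdr_node": 4e6}     # assumed ops/s
power = {"phone": 1.5, "laptop": 20.0, "sdr_node": 6.0}    # assumed watts
shares = allocate(1e8, rates)
print(shares)
print(cost(shares, rates, power))   # proportional split equalises the finish times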

Virginia Tech
22.
Khalifa, Ahmed Abdelmonem Abuelfotooh Ali.
Collaborative Computing Cloud: Architecture and Management Platform.
Degree: PhD, Computer Engineering, 2015, Virginia Tech
URL: http://hdl.handle.net/10919/72866
► We are witnessing exponential growth in the number of powerful, multiply-connected, energy-rich stationary and mobile nodes, which will make available a massive pool of computing…
(more)
▼ We are witnessing exponential growth in the number of powerful, multiply-connected, energy-rich stationary and mobile nodes, which will make available a massive pool of computing and communication resources. We claim that cloud computing can provide resilient on-demand computing, and more effective and efficient utilization of a potentially infinite array of resources. Current cloud computing systems are primarily built using stationary resources. Recently, principles of cloud computing have been extended to the mobile computing domain, aiming to form local clouds of mobile devices that share their computing resources to run cloud-based services.
However, current cloud computing systems by and large fail to provide true on-demand computing due to their lack of the following capabilities: 1) providing resilience and autonomous adaptation to the real-time variation of the underlying dynamic and scattered resources as they join or leave the formed cloud; 2) decoupling cloud management from resource management, and hiding the heterogeneous resource capabilities of participant nodes; and 3) ensuring reputable resource providers and preserving the privacy and security constraints of these providers while allowing multiple users to share their resources. Consequently, systems and consumers are hindered from effectively and efficiently utilizing the virtually infinite pool of computing resources.
We propose a platform for mobile cloud computing that integrates: 1) a dynamic real-time resource scheduling, tracking, and forecasting mechanism; 2) an autonomous resource management system; and 3) a cloud management capability for cloud services that hides the heterogeneity, dynamicity, and geographical diversity concerns from the cloud operation. We hypothesize that this would enable a 'Collaborative Computing Cloud (C3)' for on-demand computing, which is a dynamically formed cloud of stationary and/or mobile resources that provides ubiquitous computing on demand. The C3 would support a new resource-infinite computing paradigm to expand problem solving beyond the confines of walled-in resources and services by utilizing the massive pool of computing resources in both stationary and mobile nodes.
In this dissertation, we present a C3 management platform, named PlanetCloud, for enabling both a new resource-infinite computing paradigm using cloud computing over stationary and mobile nodes, and true ubiquitous on-demand cloud computing. This has the potential to liberate cloud users from being concerned about resource constraints and provides access to the cloud anytime and anywhere.
PlanetCloud synergistically manages 1) resources, including resource harvesting, forecasting and selection, and 2) cloud services, covering resilient cloud services such as resource provider collaboration, application execution isolation from resource layer concerns, seamless load migration, fault-tolerance, and task deployment, migration, and revocation. Specifically, our main contributions in the context of PlanetCloud are as follows.
1. PlanetCloud…
Advisors/Committee Members: Hou, Yiwei Thomas (committeechair), Eltoweissy, Mohamed Youssef (committeechair), Riad, Sedki Mohamed (committee member), Silva, Luiz A. (committee member), Chen, Ing Ray (committee member), El-Nainay, Mustafa Yousry (committee member).
Subjects/Keywords: Cloud Computing; Mobile Computing; Collaborative Computing; On-Demand Computing; Distributed Resource Management; Virtualization
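A tiny sketch of the resource-forecasting ingredient mentioned above: exponential smoothing of each node's observed availability, used to prefer providers that are likely to stay in the formed cloud. The smoothing factor, the presence traces, and the selection threshold are illustrative assumptions, not PlanetCloud's actual forecasting mechanism.

def smooth_availability(history, alpha=0.3):
    """Exponentially smoothed availability forecast from 0/1 presence observations."""
    forecast = history[0]
    for observed in history[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

# Assumed presence traces: 1 = node was reachable during the interval, 0 = it was not.
traces = {
    "bus_node":  [1, 0, 1, 0, 0, 1, 0],
    "desk_node": [1, 1, 1, 1, 1, 1, 1],
}
forecasts = {n: smooth_availability(h) for n, h in traces.items()}
selected = [n for n, f in forecasts.items() if f > 0.8]   # providers likely to remain available
print(forecasts, selected)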

NSYSU
23.
Kao, Chih-Hung.
Mobile Computing Architecture Based on Linux Kernel.
Degree: Master, Electrical Engineering, 2014, NSYSU
URL: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0725114-124137
► Recent cloud services from many vendors are booming in data manipulation applications. Due to the rapid development of wireless communication, cloud services can enhance…
(more)
▼ Recent cloud services from many vendors are booming in data manipulation applications. Due to the rapid development of wireless communication, cloud services can reach a higher level and support more aspects of use.
In this paper, we present the design of a cloud service environment that can flexibly and generically provide distributed computing for mobile devices. The architecture is named the GDCP (Generic Distributed Computing Platform). The GDCP has two major features. First, it provides a platform on which users with general-purpose computers and spare computing power can design acceleration functions and register them for application designers who need to build performance-critical programs or mobile apps. Second, the GDCP offers a distributed computing mechanism that application designers can use to construct such performance-critical applications. To embed the GDCP environment in a general-purpose computer, we survey in depth how to design the platform engines in the Linux kernel and how to manage computing resources according to CPU usage.
The GDCP provides a novel distributed computing service for users. Because general-purpose computers can join the GDCP, the cost of setting it up is reduced. The environment for designing custom accelerated functions and apps lets app developers construct performance-critical apps. With the GDCP, distributed computing can be achieved generically and flexibly.
Advisors/Committee Members: Zi-Tsan Chou (chair), Yu-Liang Chou (chair), Jih-ching Chiu (committee member), Chu-sing Yang (chair).
Subjects/Keywords: cloud; distributed computing; driver; mobile; IaaS

NSYSU
24.
Huang, Sheng-Yu.
Implementation of Distributed Computing Management by Kernel Programming.
Degree: Master, Electrical Engineering, 2015, NSYSU
URL: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0713115-112214
► Cloud services are primarily divided into two types: data servers and distributed computing architectures. Distributed computing can meet the requirements of rapid and reliable data processing…
(more)
▼ Cloud services are primarily divided into two types: data servers and distributed computing architectures. Distributed computing can meet the requirements of rapid and reliable data processing at a lower cost. In this paper, we propose a new distributed computing management scheme implemented through kernel programming. The communication architecture is composed of a management server, computing nodes, and clients. It can manage many computing nodes effectively and offer users a more flexible distributed computing environment.
In order to manage computing nodes uniformly and process data in a distributed manner, our architecture is divided into three parts: the management server, the computing nodes, and the client. The management server is responsible for accepting and managing computing node information; it distributes computing resources and connects a client to a computing node when the client sends an operational request. The computing nodes are based on the kernels of different operating systems (Windows and Linux), letting idle resources be used as a platform by the computing nodes, which reduces equipment cost. On the client we use the concept of MapReduce, with a Map agent and a Reduce agent to help disperse and collect data.
Finally, we prove our implementation of distributed computing management through practical verification on the operating system kernels of Windows 7/8 and Linux Ubuntu 13.10. It can integrate computing nodes of different operating systems and solve the hot-spot problem.
Advisors/Committee Members: Zi-Tsan Chou (chair), Tong-Yu Hsieh (chair), Jih-ching Chiu (committee member), Shiann-Rong Kuang (chair).
Subjects/Keywords: Distributed Computing; Network Protocol; Kernel; CUDA; Cloud
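A minimal sketch of the client-side Map agent / Reduce agent idea described above, with node communication replaced by local function calls. The chunking scheme, node count, and word-count job are illustrative assumptions rather than the thesis's kernel-level implementation.

from collections import Counter

def map_agent(text, nodes):
    """Disperse the input into one chunk per computing node."""
    words = text.split()
    size = max(1, len(words) // nodes)
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def node_task(chunk):
    """Work that would run on a computing node: count words in its chunk."""
    return Counter(chunk.split())

def reduce_agent(partials):
    """Collect and merge the partial results returned by the nodes."""
    total = Counter()
    for p in partials:
        total.update(p)
    return total

chunks = map_agent("the quick brown fox jumps over the lazy dog the end", nodes=3)
print(reduce_agent([node_task(c) for c in chunks]))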
25.
Polyvyanyy, Artem.
Structuring process models.
Degree: phd, 2012, University of Potsdam
URL: https://eprints.qut.edu.au/103118/
► One can fairly adopt the ideas of Donald E. Knuth to conclude that process modeling is both a science and an art. Process modeling does…
(more)
▼ One can fairly adopt the ideas of Donald E. Knuth to conclude that process modeling is both a science and an art. Process modeling does have an aesthetic sense. Similar to composing an opera or writing a novel, process modeling is carried out by humans who undergo creative practices when engineering a process model. Therefore, the very same process can be modeled in a myriad number of ways. Once modeled, processes can be analyzed by employing scientific methods.
Usually, process models are formalized as directed graphs, with nodes representing tasks and decisions, and directed arcs describing temporal constraints between the nodes. Common process definition languages, such as Business Process Model and Notation (BPMN) and Event-driven Process Chain (EPC), allow process analysts to define models with arbitrarily complex topologies. The absence of structural constraints supports creativity and productivity, as there is no need to force ideas into a limited set of available structural patterns. Nevertheless, it is often preferable that models follow certain structural rules.
A well-known structural property of process models is (well-)structuredness. A process model is (well-)structured if and only if every node with multiple outgoing arcs (a split) has a corresponding node with multiple incoming arcs (a join), and vice versa, such that the set of nodes between the split and the join induces a single-entry-single-exit (SESE) region; otherwise the process model is unstructured. The motivations for well-structured process models are manifold:
(i) Well-structured process models are easier to layout for visual representation as their formalizations are planar graphs.
(ii) Well-structured process models are easier to comprehend by humans.
(iii) Well-structured process models tend to have fewer errors than unstructured ones and it is less probable to introduce new errors when modifying a well-structured process model.
(iv) Well-structured process models are better suited for analysis with many existing formal techniques applicable only for well-structured process models.
(v) Well-structured process models are better suited for efficient execution and optimization, e.g., when discovering independent regions of a process model that can be executed concurrently.
Consequently, there are process modeling languages that encourage well-structured modeling, e.g., Business Process Execution Language (BPEL) and ADEPT. However, well-structured process modeling implies some limitations:
(i) There exist processes that cannot be formalized as well-structured process models.
(ii) There exist processes that when formalized as well-structured process models require a considerable duplication of modeling constructs.
Rather than expecting well-structured modeling from start, we advocate for the absence of structural constraints when modeling. Afterwards, automated methods can suggest, upon request and whenever possible, alternative formalizations that are…
Subjects/Keywords: DISTRIBUTED COMPUTING (080500); INFORMATION SYSTEMS (080600)

University of Georgia
26.
Agarwal, Abhishek.
Merging parallel simulations.
Degree: 2014, University of Georgia
URL: http://hdl.handle.net/10724/22049
► In earlier work cloning is proposed as a means for efficiently splitting a running simulation midway through its execution into multiple parallel simulations. In…
(more)
▼ In earlier work, cloning is proposed as a means for efficiently splitting a running simulation midway through its execution into multiple parallel simulations. In simulation cloning, clones usually are able to share computations that occur early in the simulation, but as their states diverge, individual logical processes (LPs) are replicated as necessary so that their computations proceed independently. Over time the states of the clones (or their constituent LPs) may converge. Traditionally, these converged LPs would continue to execute identical events. We address this inefficiency by merging previously cloned LPs. We show that such merging can further increase efficiency beyond that obtained through cloning only. We discuss our implementation of merging and illustrate its effectiveness in several example simulation scenarios.
Subjects/Keywords: Parallel and Distributed Computing; Simulations; Cloning; Merging.

University of Newcastle
27.
Lynar, Timothy Michael.
Energy conservation in distributed heterogeneous computing environments using economic resource allocation mechanisms.
Degree: PhD, 2011, University of Newcastle
URL: http://hdl.handle.net/1959.13/923275
► Research Doctorate - Doctor of Philosophy (PhD)
This thesis examines the question: can economic resource allocation mechanisms be used in distributed computing environments to reduce…
(more)
▼ Research Doctorate - Doctor of Philosophy (PhD)
This thesis examines the question: can economic resource allocation mechanisms be used in distributed computing environments to reduce energy consumption whilst maintaining execution speed? This thesis investigates the use of several resource allocation mechanisms that take account of the power consumption and processing capacity of each available computing node within a distributed heterogeneous computing environment. Different economic resource allocation mechanisms have different attributes and allocate resources differently. The resource allocation mechanisms are evaluated to examine their effect on the time and energy required to process a workload of the sort that might be expected in a distributed computing system. Initial examination of the resource allocation mechanisms was conducted through the execution of artificial workloads on a simulated cluster. To further this research, a real cluster and grid environment was created from obsolete computers. An examination was undertaken of the use of obsolete computers in distributed computing environments and how the use of such systems may assist to mitigate electronic waste. The examination of resource allocation was continued on a cluster, and then on an institutional grid. The simulation model was then calibrated to the cluster and grid, which was then used to simulate the execution of real published grid workloads under each of the resource allocation mechanisms. The resource allocation mechanisms under consideration were found to have different characteristics that resulted in them being suited for different types of workload. It was also found that the choice of a resource allocation mechanism that takes account of the power consumption and performance of individual resources can make a significant difference, through leveraging the heterogeneous nature of resources, to the total system energy consumed and time taken in computing a workload.
Advisors/Committee Members: University of Newcastle. Faculty of Science and Information Technology, School of Design, Communication and Information Technology.
Subjects/Keywords: grids; resources; allocation; distributed; energy; conservation; computing
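A minimal sketch, under assumed node figures, of an allocation rule that accounts for both the power consumption and the processing capacity of each node: assign each task to the node with the lowest estimated energy for it, breaking ties by the earliest finish time. This is a generic greedy policy for illustration, not one of the economic mechanisms evaluated in the thesis.

def assign(tasks, nodes):
    """Greedy allocation: each task goes to the node with the lowest energy cost,
    using earliest finish time as the tie-breaker."""
    busy_until = {n: 0.0 for n in nodes}
    plan = {}
    for task, work in tasks:
        def key(n):
            runtime = work / nodes[n]["rate"]
            energy = nodes[n]["power"] * runtime
            return (energy, busy_until[n] + runtime)
        best = min(nodes, key=key)
        runtime = work / nodes[best]["rate"]
        busy_until[best] += runtime
        plan[task] = (best, runtime, nodes[best]["power"] * runtime)
    return plan, busy_until

# Assumed heterogeneous cluster built from obsolete machines: ops/s and watts are invented.
cluster = {"old_p4": {"rate": 1e6, "power": 120.0},
           "newer_i5": {"rate": 4e6, "power": 65.0}}
plan, finish = assign([("t1", 4e6), ("t2", 8e6), ("t3", 2e6)], cluster)
print(plan)
print(finish)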

Iowa State University
28.
Tang, Li.
Algebraic approaches for coded caching and distributed computing.
Degree: 2020, Iowa State University
URL: https://lib.dr.iastate.edu/etd/17873
► This dissertation examines the power of algebraic methods in two areas of modern interest: caching for large scale content distribution and straggler mitigation within distributed…
(more)
▼ This dissertation examines the power of algebraic methods in two areas of modern interest: caching for large scale content distribution and straggler mitigation within distributed computation.
Caching is a popular technique for facilitating large scale content delivery over the Internet. Traditionally, caching operates by storing popular content closer to the end users. Recent work within the domain of information theory demonstrates that allowing coding in the cache and coded transmission from the server (referred to as coded caching) to the end users can allow for significant reductions in the number of bits transmitted from the server to the end users. The first part of this dissertation examines problems within coded caching.
The original formulation of the coded caching problem assumes that the server and the end users are connected via a single shared link. In Chapter 2, we consider a more general topology where there is a layer of relay nodes between the server and the users. We propose novel schemes for a class of such networks that satisfy a so-called resolvability property and demonstrate that the performance of our scheme is strictly better than previously proposed schemes. Moreover, the original coded caching scheme requires that each file hosted in the server be partitioned into a large number (i.e., the subpacketization level) of non-overlapping subfiles. From a practical perspective, this is problematic as it means that prior schemes are only applicable when the size of the files is extremely large. In Chapter 3, we propose a novel coded caching scheme that enjoys a significantly lower subpacketization level than prior schemes, while only suffering a marginal increase in the transmission rate. We demonstrate that several schemes with subpacketization levels that are exponentially smaller than the basic scheme can be obtained.
The second half of this dissertation deals with large scale distributed matrix computations. Distributed matrix multiplication is an important problem, especially in domains such as deep learning of neural networks. It is well recognized that the computation times on distributed clusters are often dominated by the slowest workers (called stragglers). Recently, techniques from coding theory have found applications in straggler mitigation in the specific context of matrix-matrix and matrix-vector multiplication. The computation can be completed as long as a certain number of workers (called the recovery threshold) complete their assigned tasks.
In Chapter 4, we consider matrix multiplication under the assumption that the absolute values of the matrix entries are sufficiently small. Under this condition, we present a method with a significantly smaller recovery threshold than prior work. Besides, the prior work suffers from serious numerical issues owing to the condition number of the corresponding real Vandermonde-structured recovery matrices; this condition number grows exponentially in the number of workers. In Chapter 5, we present a novel approach that leverages the properties…
Subjects/Keywords: algebra; caching; coding theory; distributed computing
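For reference, the baseline against which coded caching schemes (including the lower-subpacketization constructions discussed above) are usually measured is the centralized scheme of Maddah-Ali and Niesen. With N files, K users, a per-user cache of M files, and t = KM/N an integer, the server's worst-case load over the shared link and the required subpacketization level are, in standard notation:

R(M) = \frac{K\,(1 - M/N)}{1 + KM/N}, \qquad F = \binom{K}{t}, \quad t = \frac{KM}{N}.

The binomial subpacketization F grows roughly exponentially in K, which is exactly the practical obstacle that motivates the schemes in Chapter 3 with exponentially smaller subpacketization at only a marginal increase in the transmission rate.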

University of Ottawa
29.
Guan, Shichao.
A Multi-layered Scheme for Distributed Simulations on the Cloud Environment
.
Degree: 2015, University of Ottawa
URL: http://hdl.handle.net/10393/32121
► In order to improve simulation performance and integrate simulation resources among geographically distributed locations, the concept of distributed simulation is proposed. Several types of distributed…
(more)
▼ In order to improve simulation performance and integrate simulation resources among geographically distributed locations, the concept of distributed simulation is proposed. Several types of distributed simulation standards, such as DIS and HLA are established to formalize simulations and achieve reusability and interoperability of simulation components. In order to implement these distributed simulation standards and manage the underlying system of distributed simulation applications, Grid Computing and Cloud Computing technologies are employed to tackle the details of operation, configuration, and maintenance of the simulation platforms in which simulation applications are deployed. However, for modelers who may not be familiar with the management of distributed systems, it is challenging to create a simulation-run-ready environment that incorporates different types of computing resources and network environments. In this thesis, we propose a new multi-layered cloud-based scheme for enabling modeling and simulation based on different distributed simulation standards. The scheme is designed to ease the management of underlying resources and achieve rapid elasticity, providing unlimited computing capability to end users; energy consumption, security, multi-user availability, scalability and deployment issues are all considered. We describe a mechanism for handling diverse network environments. With its adoption, idle public resources can easily be configured as additional computing capabilities for the local resource pool. A fast deployment model is built to relieve the migration and installation process of this platform. An energy conservation strategy is utilized to reduce the energy consumption of computing resources. Security components are also implemented to protect sensitive information and block malicious attacks in the cloud. In the experiments, the proposed scheme is compared with its corresponding grid computing platform; the cloud computing platform achieves a similar performance, but incorporates many of the cloud's advantages.
Subjects/Keywords: Cloud Computing;
Distributed Simulation;
Energy Consumption;
Usability
APA (6th Edition):
Guan, S. (2015). A Multi-layered Scheme for Distributed Simulations on the Cloud Environment. (Thesis). University of Ottawa. Retrieved from http://hdl.handle.net/10393/32121
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Guan, Shichao. “A Multi-layered Scheme for Distributed Simulations on the Cloud Environment.” 2015. Thesis, University of Ottawa. Accessed January 21, 2021.
http://hdl.handle.net/10393/32121.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Guan, Shichao. “A Multi-layered Scheme for Distributed Simulations on the Cloud Environment.” 2015. Web. 21 Jan 2021.
Vancouver:
Guan S. A Multi-layered Scheme for Distributed Simulations on the Cloud Environment. [Internet] [Thesis]. University of Ottawa; 2015. [cited 2021 Jan 21].
Available from: http://hdl.handle.net/10393/32121.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Guan S. A Multi-layered Scheme for Distributed Simulations on the Cloud Environment. [Thesis]. University of Ottawa; 2015. Available from: http://hdl.handle.net/10393/32121
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Delft University of Technology
30.
Huang, H. (author).
Design, Analysis and Experimental Evaluation of a Distributed Community Detection Algorithm.
Degree: MS Computer Science, 2015, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:5ef1696a-0ef8-4d4c-a807-3d0fd3247b1d
► Complex networks are a special type of graph that frequently appears in nature and in many different fields of science and engineering. Studying complex networks…
(more)
▼ Complex networks are a special type of graph that frequently appears in nature and in many fields of science and engineering, and studying them is key to solving problems in those fields. Complex networks have unique features that regular graphs lack, and their study gives rise to many interesting research questions. One such feature is community structure. Intuitively, communities are groups of vertices that are densely connected within the group and only sparsely connected to the rest of the graph. The notion of community has practical significance: many real-world concepts and phenomena can be translated into communities in a graph, such as politicians with similar opinions in a political opinion network. In this thesis, a distributed version of a popular community detection method, the Louvain method, is developed using the graph computation framework Apache Spark GraphX. Characteristics of this algorithm, such as convergence and the quality of the communities produced, are studied by both theoretical reasoning and experimental evaluation. The results show that the algorithm parallelizes community detection effectively. The thesis also explores the use of graph sampling to accelerate resolution-parameter selection for a resolution-limit-free community detection method. Two sampling algorithms, random node selection and forest fire sampling, are compared; the comparison leads to recommendations for the choice of sampling algorithm and its parameter values.
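For readers unfamiliar with the Louvain method that this thesis parallelizes on Spark GraphX, the following single-threaded Python sketch illustrates its local-move phase: each vertex is repeatedly moved to the neighboring community that yields the largest modularity gain. This is a generic illustration under simplifying assumptions (unweighted graph, no community aggregation phase), not the thesis's GraphX implementation; the graph representation and function names are chosen for illustration.

from collections import defaultdict

def louvain_local_move(adj):
    """One local-move phase of a Louvain-style algorithm on an undirected,
    unweighted graph given as {node: set(neighbors)}.
    Returns a {node: community} assignment."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2.0   # number of edges
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    community = {v: v for v in adj}                     # start: one community per node
    sigma_tot = defaultdict(float)                      # total degree per community
    for v in adj:
        sigma_tot[community[v]] += degree[v]

    improved = True
    while improved:
        improved = False
        for v in adj:
            old_c = community[v]
            # number of edges from v into each neighboring community
            links = defaultdict(float)
            for u in adj[v]:
                links[community[u]] += 1.0
            # temporarily remove v from its community
            sigma_tot[old_c] -= degree[v]
            # gain of joining community c, proportional to the modularity gain:
            #   k_{v,c} - Sigma_tot(c) * k_v / (2m)
            best_c = old_c
            best_gain = links.get(old_c, 0.0) - sigma_tot[old_c] * degree[v] / (2 * m)
            for c, k_vc in links.items():
                gain = k_vc - sigma_tot[c] * degree[v] / (2 * m)
                if gain > best_gain:
                    best_c, best_gain = c, gain
            sigma_tot[best_c] += degree[v]
            if best_c != old_c:
                community[v] = best_c
                improved = True
    return community

# Tiny usage example: two triangles joined by a single edge.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(louvain_local_move(adj))   # each triangle ends up in its own community

The per-vertex gain computation depends only on a vertex's neighborhood and per-community totals, which is what makes this phase amenable to vertex-centric parallel frameworks such as GraphX.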
Master of Science Computer Science
Software and Computer Technology
Electrical Engineering, Mathematics and Computer Science
Advisors/Committee Members: Hidders, A.J.H. (mentor), Krings, G. (mentor).
Subjects/Keywords: complex network; community detection; distributed computing
APA (6th Edition):
Huang, H. (2015). Design, Analysis and Experimental Evaluation of a Distributed Community Detection Algorithm. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:5ef1696a-0ef8-4d4c-a807-3d0fd3247b1d
Chicago Manual of Style (16th Edition):
Huang, H. “Design, Analysis and Experimental Evaluation of a Distributed Community Detection Algorithm.” 2015. Masters Thesis, Delft University of Technology. Accessed January 21, 2021.
http://resolver.tudelft.nl/uuid:5ef1696a-0ef8-4d4c-a807-3d0fd3247b1d.
MLA Handbook (7th Edition):
Huang, H. “Design, Analysis and Experimental Evaluation of a Distributed Community Detection Algorithm.” 2015. Web. 21 Jan 2021.
Vancouver:
Huang H. Design, Analysis and Experimental Evaluation of a Distributed Community Detection Algorithm. [Internet] [Masters thesis]. Delft University of Technology; 2015. [cited 2021 Jan 21].
Available from: http://resolver.tudelft.nl/uuid:5ef1696a-0ef8-4d4c-a807-3d0fd3247b1d.
Council of Science Editors:
Huang H. Design, Analysis and Experimental Evaluation of a Distributed Community Detection Algorithm. [Masters Thesis]. Delft University of Technology; 2015. Available from: http://resolver.tudelft.nl/uuid:5ef1696a-0ef8-4d4c-a807-3d0fd3247b1d
◁ [1] [2] [3] [4] [5] … [31] ▶