You searched for subject:(computer model)
Showing records 1 – 30 of 1814 total matches.
1.
Zuffi, Silvia.
Shape Models of the Human Body for Distributed Inference.
Degree: PhD, Computer Science, 2015, Brown University
URL: https://repository.library.brown.edu/studio/item/bdr:419434/
In this thesis we address the problem of building shape models of the human body, in 2D and 3D, which are realistic and efficient to use. We focus our efforts on the human body, which is highly articulated and has interesting shape variations, but the approaches we present here can be applied to generic deformable and articulated objects. To address efficiency, we constrain our models to be part-based and have a tree-structured representation with pairwise relationships between connected parts. This allows the application of methods for distributed inference based on message passing. To address realism, we exploit recent advances in computer graphics that represent the human body with statistical shape models learned from 3D scans. We introduce two articulated body models: a 2D model, named Deformable Structures (DS), which is a contour-based model parameterized for 2D pose and projected shape, and a 3D model, named Stitchable Puppet (SP), which is a mesh-based model parameterized for 3D pose, pose-dependent deformations, and intrinsic body shape. We have successfully applied the models to interesting and challenging problems in computer vision and computer graphics, namely pose estimation from static images, pose estimation from video sequences, and pose and shape estimation from 3D scan data. This advances the state of the art in human pose and shape estimation and suggests that carefully defined realistic models can be important for computer vision. More work at the intersection of vision and graphics is thus encouraged.
Advisors/Committee Members: Black, Michael (Director), Sudderth, Erik (Reader), Ramanan, Deva (Reader).
Subjects/Keywords: human body model; shape model; computer vision
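The efficiency claim rests on the tree-structured, pairwise representation: on a tree, exact MAP inference by max-product message passing costs only O(parts × states²). A toy illustration of that machinery follows; the parts, candidate placements, and all scores are invented for the sketch, not taken from the thesis.

```python
import numpy as np

# Toy tree of body parts: torso is the root; head and arm attach to it.
# unary[p][i] = local score of placement i for part p;
# pair[(p, c)][i, j] = compatibility of parent placement i with child placement j.
children = {"torso": ["head", "arm"], "head": [], "arm": []}
unary = {"torso": np.array([0.2, 0.8]),
         "head": np.array([0.6, 0.4]),
         "arm": np.array([0.5, 0.5])}
pair = {("torso", "head"): np.array([[0.9, 0.1], [0.2, 0.8]]),
        ("torso", "arm"): np.array([[0.7, 0.3], [0.4, 0.6]])}

def upward_message(parent, child):
    """Max-product message from child to parent: best child score per parent state."""
    score = unary[child].copy()
    for grandchild in children[child]:
        score *= upward_message(child, grandchild)
    return (pair[(parent, child)] * score[None, :]).max(axis=1)

def map_assignment(root="torso"):
    """Exact MAP placement of all parts: one upward sweep, then backtracking."""
    root_score = unary[root].copy()
    for c in children[root]:
        root_score *= upward_message(root, c)
    assignment = {root: int(np.argmax(root_score))}
    stack = [root]
    while stack:                      # downward pass: each child picks its best state
        parent = stack.pop()
        for c in children[parent]:
            score = unary[c].copy()
            for gc in children[c]:
                score *= upward_message(c, gc)
            assignment[c] = int(np.argmax(pair[(parent, c)][assignment[parent], :] * score))
            stack.append(c)
    return assignment

print(map_assignment())  # {'torso': 1, 'head': 1, 'arm': 1}
```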

Brigham Young University
2.
Guo, Yisong.
Using Agent-Based Models to Understand Multi-Operator Supervisory Control.
Degree: MS, 2012, Brigham Young University
URL: https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=3969&context=etd
As technology advances, many practical applications require human-controlled robots. For such applications, it is useful to determine the optimal number of robots an operator should control to maximize human efficiency in different situations. One way to achieve this is through computer simulations of team performance. To factor in the various parameters that may affect team performance, an agent-based model is used. Agent-based modeling is a computational method that enables a researcher to create, analyze, and experiment with models composed of agents that interact within an environment [12]. We construct an agent-based model of humans interacting with robots, and explore how team performance relates to different agent parameters and team organizational structures [21]. Prior work describes interaction between a single operator and multiple robots, while this work includes multi-operator performance and coordination. Model parameters include neglect time, interaction time, operator slack time, and level of robot autonomy. Understanding the parameters that influence team performance will be a step towards finding ways to maximize performance in real-life human-robot systems.
Subjects/Keywords: computer; agent-based model; simulation; Computer Sciences
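A rough sketch of the kind of simulation described: one operator divides attention among N robots, each robot improves while serviced and degrades while neglected, and sweeping the fan-out exposes the trade-off. All dynamics and parameter values below are invented for illustration, not taken from the thesis.

```python
import random

# Hypothetical agent-based sketch: one operator shares attention among N robots.
def simulate(num_robots, interaction_time=5, neglect_limit=20, steps=1000, seed=0):
    rng = random.Random(seed)
    perf = [1.0] * num_robots          # per-robot performance in [0, 1]
    neglected_for = [0] * num_robots   # time since last service
    current, remaining = 0, interaction_time
    total = 0.0
    for _ in range(steps):
        for r in range(num_robots):
            if r == current:
                perf[r] = min(1.0, perf[r] + 0.05)   # serviced: recover
                neglected_for[r] = 0
            else:
                neglected_for[r] += 1
                decay = 0.01 if neglected_for[r] < neglect_limit else 0.05
                perf[r] = max(0.0, perf[r] - decay * rng.random())
        total += sum(perf)
        remaining -= 1
        if remaining == 0:   # operator switches to the most-neglected robot
            current = max(range(num_robots), key=lambda r: neglected_for[r])
            remaining = interaction_time
    return total / (steps * num_robots)  # mean per-robot performance

# Sweep fan-out to look for the point where adding robots hurts performance.
for n in (1, 2, 4, 8, 16):
    print(n, round(simulate(n), 3))
```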

University of Michigan
3.
Singh, Abhayendra Narayan.
A Safety-First Approach to Memory Models.
Degree: PhD, Computer Science and Engineering, 2016, University of Michigan
URL: http://hdl.handle.net/2027.42/120794
Sequential consistency (SC) is arguably the most intuitive behavior for a shared-memory multithreaded program. It is widely accepted that language-level SC could significantly improve the programmability of a multiprocessor system. However, efficiently supporting end-to-end SC remains a challenge, as it requires that both compiler and hardware optimizations preserve SC semantics.

Current concurrent languages support a relaxed memory model that requires programmers to explicitly annotate all memory accesses that can participate in a data race ("unsafe" accesses). This requirement allows the compiler and hardware to aggressively optimize unannotated accesses, which are assumed to be data-race-free ("safe" accesses), while still preserving SC semantics. However, unannotated data races are easy for programmers to accidentally introduce and are difficult to detect, and in such cases the safety and correctness of programs are significantly compromised.

This dissertation argues instead for a safety-first approach, whereby every memory operation is treated as potentially unsafe by the compiler and hardware unless it is proven otherwise.

The first solution, the DRFx memory model, allows many common (potentially SC-violating) compiler and hardware optimizations on unsafe accesses and uses runtime support to detect potential SC violations arising from reordering of unsafe accesses. On detecting a potential SC violation, execution is halted before the safety property is compromised.

The second solution takes a different approach and preserves SC in both the compiler and hardware. Both the SC-preserving compiler and hardware are built on the safety-first approach: all memory accesses are treated as potentially unsafe. SC-preserving hardware relies on different static and dynamic techniques to identify safe accesses. Our results indicate that supporting SC at the language level is not expensive in terms of performance and hardware complexity.

The dissertation also explores an extension of this safety-first approach for data-parallel accelerators such as Graphics Processing Units (GPUs). Significant microarchitectural differences between CPUs and GPUs require rethinking efficient solutions for preserving SC on GPUs. The proposed solution based on our SC-preserving approach performs nearly on par with a baseline GPU that implements a data-race-free-0 memory model.
Advisors/Committee Members: Narayanasamy, Satish (committee member), Zhang, Zhengya (committee member), Chen, Peter M (committee member), Wenisch, Thomas F. (committee member), Musuvathi, Madanlal (committee member).
Subjects/Keywords: Memory Consistency Model; Memory Model; DRFx memory model; Computer Science; Engineering
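The danger of unannotated races is usually shown with the store-buffering litmus test: under SC the outcome r1 = r2 = 0 is impossible, while relaxed compilers and hardware can produce it by reordering. A small enumerator of SC interleavings (my own illustration, not the dissertation's tooling) confirms what SC permits.

```python
from itertools import permutations

# Store-buffering litmus test. Thread A: x = 1; r1 = y.  Thread B: y = 1; r2 = x.
A = [("store", "x"), ("load", "y", "r1")]
B = [("store", "y"), ("load", "x", "r2")]

def sc_outcomes():
    """Enumerate all SC interleavings that preserve each thread's program order."""
    outcomes = set()
    ops = [("A", 0), ("A", 1), ("B", 0), ("B", 1)]
    for order in permutations(ops):
        if order.index(("A", 0)) > order.index(("A", 1)):
            continue                     # program order within thread A violated
        if order.index(("B", 0)) > order.index(("B", 1)):
            continue                     # program order within thread B violated
        mem, regs = {"x": 0, "y": 0}, {}
        for tid, i in order:
            op = (A if tid == "A" else B)[i]
            if op[0] == "store":
                mem[op[1]] = 1
            else:
                regs[op[2]] = mem[op[1]]
        outcomes.add((regs["r1"], regs["r2"]))
    return outcomes

print(sorted(sc_outcomes()))  # [(0, 1), (1, 0), (1, 1)] -- (0, 0) is never SC
```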

University of California – Berkeley
4.
Poggio, Andrew Anthony.
The Path of the Blind Watchmaker: A Model of Evolution.
Degree: Computer Science, 2011, University of California – Berkeley
URL: http://www.escholarship.org/uc/item/47p434st
Evolution has been described by Richard Dawkins as a blind watchmaker due to its being unconscious and random but selective and able to produce complex forms. Evolution from an early, primitive organism (the Last Universal Common Ancestor of all life, LUCA) to Homo sapiens is the most dramatic biological process that has taken place on Earth, and knowledge of it is important to understanding many aspects of biology, including disease prevention and treatment.

We claim that computational biology has now reached the point that astronomy reached when it began to look backward in time to the Big Bang. Our goal is to look backward in biological time, and to begin to describe, in more detail, LUCA and the evolution from LUCA to us. This evolutionary process is the path of the blind watchmaker.

This thesis presents a novel dataset of LUCA and other early genome sequences that we have reconstructed. These ancestors serve as reference species for our models. We develop a sequence evolution model that reflects biological processes more accurately than prior work and apply it to the ancestral genome dataset. This model uses empirical mutation probabilities for scoring alignments and includes inversion mutations. The results of this model describe the mutations that must have taken place during the evolution of our reference species.

We then apply the sequence evolution results to our population evolution model. This model uses a dynamic set of population pools with related but distinct, mutating genomes reproducing sexually and asexually, and subject to speciation effects, selection pressures, and environmental carrying-capacity limitations. Due to a dearth of empirical data needed to estimate model parameters of earlier organisms, our population model did not extend all the way back to LUCA; it instead extended back to a more recent common ancestor. The results of this model are population size estimates, evolution duration estimates, and identification of critical evolution parameters and estimates of their values.

We present the results of these models along with evidence for some tantalizing, if speculative, discoveries along the path. This work also identifies significant opportunities for further efforts in silico, in vitro, and in vivo.
Subjects/Keywords: Computer Science; Bioinformatics; evolution; model; population; sequence
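A toy version of a sequence evolution model with per-base substitution probabilities and inversion mutations might look like the following; the rates and alphabet handling are invented for the sketch, and a biological inversion would also reverse-complement the segment (noted in a comment, omitted here).

```python
import random

BASES = "ACGT"
SUBSTITUTION_RATE = 0.01   # invented per-base, per-generation probability
INVERSION_RATE = 0.002     # invented per-generation probability of one inversion

def evolve(genome, generations, seed=0):
    rng = random.Random(seed)
    seq = list(genome)
    for _ in range(generations):
        for i in range(len(seq)):
            if rng.random() < SUBSTITUTION_RATE:
                seq[i] = rng.choice([b for b in BASES if b != seq[i]])
        if rng.random() < INVERSION_RATE:
            i, j = sorted(rng.sample(range(len(seq)), 2))
            # inversion mutation; a real inversion would also complement the strand
            seq[i:j + 1] = reversed(seq[i:j + 1])
    return "".join(seq)

ancestor = "ACGTACGTACGTACGTACGT"
descendant = evolve(ancestor, generations=200)
identity = sum(a == b for a, b in zip(ancestor, descendant)) / len(ancestor)
print(descendant, f"identity={identity:.2f}")
```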

University of Utah
5.
Johnson, Christopher Ray.
Generalized inverse problem in electrocardiography: theoretical, computational and experimental results.
Degree: PhD, Biomedical Informatics, 1990, University of Utah
URL: http://content.lib.utah.edu/cdm/singleitem/collection/etd1/id/968/rec/565
The electrical behavior within the heart gives rise to electrostatic potentials on the body surface. These potentials are related to potentials on the epicardial surface and are dependent on the geometry and resistive properties of the passive volume conductor between the heart and body surface. The determination of detailed information about the heart from noninvasive electrical measurements taken on the body surface is defined as the inverse problem in electrocardiography. In this study the generalized inverse problem in electrocardiography is solved for an anisotropic inhomogeneous volume conductor utilizing epicardial and torso potentials. The strategy includes a multicomponent computer model which consists of 1) a finite element program to solve the electrocardiographic field equation by utilizing the Ritz technique to reformulate the differential equation into a global integral equation; 2) a penalty method algorithm that is applied to the Dirichlet condition to assure accuracy at the boundaries; and 3) a local Tikhonov regularizing algorithm, used to constrain the solution by restoring continuous dependence of the solution on the data. This is achieved by utilizing a general discrepancy principle that makes use of measurement errors of torso potentials and discretization errors to optimize the choice of the regularization parameter. Objectives included theoretical, computational, and experimental studies of the effectiveness of the homogeneous assumption using a realistic geometry as well as optimization of the a priori regularization parameter. The studies were carried out using a concentric spheres model and a realistic torso model, which is the computational equivalent of an electrolytic tank. Forward and inverse calculations were performed using both models. It was shown that the multicomponent computer model was effective for solving forward and inverse problems in anisotropic, inhomogeneous media exhibiting realistic geometry. It was also shown that the homogeneous assumption is not valid for recovering detailed electrical information on the epicardial surface through an anisotropic, inhomogeneous torso. The difficulty lies, not in the new ideas, but in escaping the old ones, which ramify, for those brought up as most of us have been, into every corner of our minds.
Subjects/Keywords: Multicomponent Computer Model
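The Tikhonov step at the core of such an inverse solution is compact to state: given a forward transfer matrix A mapping epicardial to torso potentials, the regularized estimate minimizes ||Ax - b||^2 + lambda^2 ||x||^2, with lambda chosen by a discrepancy principle. A numpy sketch with random stand-in data (the real A would come from the finite element model):

```python
import numpy as np

def tikhonov_inverse(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||x||^2 (zeroth-order Tikhonov)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((120, 60))        # stand-in epicardium -> torso transfer matrix
x_true = rng.standard_normal(60)          # "true" epicardial potentials
b = A @ x_true + 0.05 * rng.standard_normal(120)  # noisy torso measurements

# Discrepancy-principle-style choice: pick the lambda whose residual magnitude
# best matches the known noise level (crude grid version).
noise_norm = 0.05 * np.sqrt(120)
lams = np.logspace(-4, 1, 50)
best = min(lams, key=lambda l: abs(np.linalg.norm(A @ tikhonov_inverse(A, b, l) - b) - noise_norm))
x_hat = tikhonov_inverse(A, b, best)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"lambda={best:.3g}, relative error={rel_err:.3f}")
```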

University of Manchester
6.
Banks, Matthew.
On Quantitative Validation and Optimisation of Physically Based Deformable Models in Computer Graphics.
Degree: 2020, University of Manchester
URL: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:325021
Physically based deformable models (PBDMs) in computer graphics (CG) simulate how virtual solid bodies respond under external loads in computer applications that must be computationally efficient enough to be user-interactive. Balance laws from continuum mechanics theory govern the behaviour of solids. The modelling and simulation of this behaviour is traditionally performed using the finite element method (FEM), which solves to full accuracy and is therefore computationally expensive. This typically makes it unsuitable for IVEs, and so different, less expensive PBDMs are researched. These PBDMs make modelling assumptions that simplify the theory, and their numerical results are less accurate than FEM's. In this thesis we present a novel software framework that allows us to quantify the accuracy of any deformation history of any PBDM as a result of its modelling simplifications. In the majority of previous studies, validation is qualitative (through visual plausibility), which is necessarily subjective. We develop a novel, objective, quantitative validation of deformation histories procedure (QVDH) to quantify the accuracy of any PBDM with an error between 0 and 1. We test QVDH in 3D cantilever and cloth scenarios, both of which are popular scenarios in the CG literature. The framework is shown to yield a high accuracy score for a simplified model that can be analytically derived from the reference model, indicating that the framework is reliable. We then extend the framework to optimise the model properties of PBDMs (which determine the material response) by minimising the error measured by QVDH. Results are in good agreement with analytically derived results, showing the effectiveness of the procedure. Finally, we use the framework to explore adaptive PBDMs, in particular PBDMs that switch to other PBDMs at runtime, and demonstrate how switching can successfully be implemented to increase the accuracy of a deformation according to QVDH. The software framework is sufficiently general to be applicable to a wide variety of deformation scenarios. It has replaceable components that can help to improve the quality of the QVDH procedure. We believe that QVDH has many uses beyond those explored in this thesis.
Advisors/Committee Members: Hazel, Andrew, Riley, Graham.
Subjects/Keywords: Physics Based Animation; Computer Graphics; Model Validation
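The thesis defines a QVDH error in [0, 1]; the exact definition is not given in this listing, so the sketch below uses a plausible stand-in (relative L2 error of the vertex-position history against the reference, clipped to 1) purely to show the shape of such a metric.

```python
import numpy as np

def history_error(sim, ref):
    """Hypothetical QVDH-style score in [0, 1]: 0 = identical to the reference.
    sim, ref: arrays of shape (timesteps, vertices, 3) of vertex positions.
    The actual QVDH definition in the thesis may differ; this is a stand-in."""
    num = np.linalg.norm(sim - ref)
    den = np.linalg.norm(ref - ref.mean(axis=(0, 1)))  # scale of the reference motion
    return float(min(1.0, num / den)) if den > 0 else 0.0

# Toy cantilever-like histories: reference vs. an overly stiff simplified model.
t = np.linspace(0, 1, 50)[:, None]            # (timesteps, 1)
x = np.linspace(0, 1, 20)[None, :]            # (1, vertices)

def history(stiffness):
    X = np.broadcast_to(x, (50, 20))
    Y = -stiffness * t * x**2                 # beam droops more over time
    Z = np.zeros((50, 20))
    return np.stack([X, Y, Z], axis=2)        # (timesteps, vertices, 3)

ref, sim = history(1.0), history(0.8)         # simplified model is 20% too stiff
print(round(history_error(sim, ref), 3))
```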

Western Kentucky University
7.
Rachamadugu, Sairaj.
Manipulation of 3D knotted polygons.
Degree: MS, Department of Mathematics and Computer Science, 2012, Western Kentucky University
URL: https://digitalcommons.wku.edu/theses/1162
This thesis discusses the development of a software architecture to support the computational investigation of random polygons in 3-space. The random polygons themselves are a simple model of long polymer chains. (A DNA molecule is one example of a polymer.)

This software architecture includes "building blocks", which specify the actual manipulations and computations to be performed, and a structural framework, which allows the user to specify which manipulations/computations to perform, in which order, and with how many repetitions. The overall framework is designed in such a way that new building blocks can easily be added in the future. The development of three building blocks for this architecture, entitled Reducer, Lengthener, and OutsideInLengthener, is also discussed in this thesis. These building blocks manipulate the existing polygons, increasing or decreasing their size.
Advisors/Committee Members: Dr. Uta Ziegler, Director, Dr. Claus Ernst, Dr. Guangming Xing.
Subjects/Keywords: Polymer chains; DNA model; Computer Sciences
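The architecture described, building blocks composed by a framework that controls order and repetition, is essentially a pipeline. A skeletal version follows; the block names match the thesis, but their bodies here are placeholders (a real Reducer or Lengthener must preserve the knot type, which takes geometric checks).

```python
from typing import List, Tuple

Polygon = List[Tuple[float, float, float]]   # closed 3D polygon as a vertex list

class Block:
    """A building block: one manipulation/computation applied to a polygon."""
    def apply(self, poly: Polygon) -> Polygon:
        raise NotImplementedError

class Reducer(Block):
    def apply(self, poly):
        # Placeholder reduction: drop every other vertex.
        return poly[::2] if len(poly) > 6 else poly

class Lengthener(Block):
    def apply(self, poly):
        # Placeholder: insert midpoints between consecutive vertices.
        out = []
        for a, b in zip(poly, poly[1:] + poly[:1]):
            out.append(a)
            out.append(tuple((ai + bi) / 2 for ai, bi in zip(a, b)))
        return out

def run(poly: Polygon, schedule: List[Tuple[Block, int]]) -> Polygon:
    """Framework: apply each block in the given order, with its repetition count."""
    for block, repeats in schedule:
        for _ in range(repeats):
            poly = block.apply(poly)
    return poly

square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(len(run(square, [(Lengthener(), 3), (Reducer(), 1)])))  # 4 -> 32 -> 16
```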

University of Colorado
8.
Hassan, Zyad.
Incremental, Inductive Model Checking.
Degree: PhD, Electrical, Computer & Energy Engineering, 2014, University of Colorado
URL: https://scholar.colorado.edu/ecen_gradetds/86
Model checking has become a widely adopted approach for the verification of hardware designs. The ever-increasing complexity of these designs creates a continuous need for faster model checkers that are capable of verifying designs within reasonable time frames to reduce time to market. IC3, the recently developed, very successful algorithm for model checking safety properties, introduced a new approach to model checking: incremental, inductive verification (IIV). The IIV approach possesses several attractive traits, such as stability and not relying on high-effort reasoning, that make its use in model checking very appealing, which motivated the development of another algorithm that follows the IIV approach for model checking ω-regular languages. That algorithm, Fair, has been shown to be capable of dealing with designs beyond the reach of its predecessors.

This thesis explores IIV as a promising approach to model checking. After identifying IIV's main elements, the thesis presents an IIV-based model checking algorithm for CTL: the first practical SAT-based algorithm for branching-time properties. The algorithm, IICTL, is shown to complement state-of-the-art BDD-based CTL algorithms on a large set of benchmarks. In addition to fulfilling the need for a SAT-based CTL algorithm, IICTL highlights ways in which IIV algorithms can be improved; one of these is addressing counterexamples to generalization, which is explored in the context of IC3 and is shown to improve the algorithm's performance considerably. The thesis then addresses an important question: for properties that fall into the scope of more than one IIV algorithm, do these algorithms behave identically? The question is answered negatively, pointing out that the IIV framework admits multiple strategies and that there is a wide spectrum of possible algorithms that all follow the IIV approach. For example, all properties in the common fragment of LTL and CTL, an important class of properties, can be checked with both Fair and IICTL. However, empirical evidence presented in the thesis suggests that neither algorithm is always superior to the other, which points out the importance of being flexible in deciding the strategy to apply to a given problem.
Advisors/Committee Members: Fabio Somenzi, Aaron R. Bradley, Pavol Cerny, Sriram Sankaranarayanan, Niklas Sorensson.
Subjects/Keywords: Formal Verification; Model Checking; Satisfiability; Computer Engineering
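At its core, IC3 repeatedly asks whether a candidate invariant is inductive. For a finite transition system that check is simple to state explicitly; the toy below (my own example, not from the thesis) also shows the situation IC3 is built for: a property that holds but is not inductive until strengthened. IC3 itself works symbolically, clause by clause, with a SAT solver.

```python
# Toy transition system: states 0..7, step s -> (s + 2) % 8, starting at 0.
STATES = set(range(8))
INIT = {0}

def successors(s):
    return {(s + 2) % 8}

def is_inductive(P):
    """P is an inductive invariant iff INIT is contained in P and
    P is closed under the transition relation."""
    return INIT <= P and all(successors(s) <= P for s in P)

prop = STATES - {1}            # safety property: state 1 is never reached
print(is_inductive(prop))      # False: 7 -> 1 breaks induction, though 7 is unreachable
strengthened = {0, 2, 4, 6}    # IC3-style strengthening: "the state is even"
print(is_inductive(strengthened), strengthened <= prop)  # True True: proves the property
```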

KTH
9.
Zhang, Kerry.
Viewpoint and Topic Modeling of Current Events.
Degree: Computer Science and Communication (CSC), 2016, KTH
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190083
There are multiple sides to every story, and while statistical topic models have been highly successful at topically summarizing the stories in corpora of text documents, they do not explicitly address the issue of learning the different sides, the viewpoints, expressed in the documents. In this paper, we show how these viewpoints can be learned completely unsupervised and represented in a human-interpretable form. We use a novel approach of applying CorrLDA2 for this purpose, which learns topic-viewpoint relations that can be used to form groups of topics, where each group represents a viewpoint. A corpus of documents about the Israeli-Palestinian conflict is then used to demonstrate how a Palestinian and an Israeli viewpoint can be learned. By leveraging the magnitudes and signs of the feature weights of a linear SVM, we introduce a principled method to evaluate associations between topics and viewpoints. With this, we demonstrate, both quantitatively and qualitatively, that the learned topic groups are contextually coherent, and form consistently correct topic-viewpoint associations.
In this bachelor's thesis we demonstrate how the viewpoints expressed in articles about current events can be modeled with an unsupervised learning method. We adapt the CorrLDA2 model for this purpose; it learns which topics are discussed in a collection of text documents, which viewpoints are expressed, and the relations between topics and viewpoints. Using these relations we can then form groups of topics, where each group is associated with a viewpoint. This yields a representation of viewpoints that is interpretable to humans. We demonstrate this on a collection of documents about the Israeli-Palestinian conflict, by forming one group of topics that represents the Palestinian viewpoint and one that represents the Israeli viewpoint. We then introduce a new evaluation method that uses the magnitudes and signs of the feature weights of a linear SVM. With it we show, both quantitatively and qualitatively, that the learned relations between topics and viewpoints form coherent topic groups and consistently correct associations between topics and viewpoints.
Subjects/Keywords: viewpoint topic model; Computer Sciences; Datavetenskap (datalogi)
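The SVM-based evaluation can be sketched as follows: train a linear SVM to predict a document's viewpoint from its topic proportions, then read each topic's weight sign and magnitude as its viewpoint association. The data below is synthetic; the thesis applies this idea to CorrLDA2 output, and the exact evaluation protocol may differ.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for per-document topic proportions over 6 topics.
# Topics 0-2 lean toward viewpoint A, topics 3-5 toward viewpoint B.
def make_docs(n, lean):
    base = rng.dirichlet(np.ones(6), size=n)
    base[:, lean] += 0.5
    return base / base.sum(axis=1, keepdims=True)

X = np.vstack([make_docs(200, [0, 1, 2]), make_docs(200, [3, 4, 5])])
y = np.array([0] * 200 + [1] * 200)   # 0 = viewpoint A, 1 = viewpoint B

svm = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
for topic, w in enumerate(svm.coef_[0]):
    side = "B" if w > 0 else "A"
    print(f"topic {topic}: weight {w:+.2f} -> associated with viewpoint {side}")
```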

Iowa State University
10.
Suvorov, Yuly.
A model checking approach for analyzing and identifying intervention policies to counter infection propagation over networks.
Degree: 2011, Iowa State University
URL: https://lib.dr.iastate.edu/etd/10431
The spread of infections (disease, ideas, fires, etc.) in a network (a group of people, an electronic network, a forest, etc.) can be modeled by the evolution of the states of nodes in a graph, defined as a function of the states of the other nodes in the graph. Given an initial configuration of the graph with a subset of the nodes infected, a propagation function that specifies how the states of the nodes change over time, and a quarantine function that specifies the generation of regions centered on the infected nodes from which the infection cannot spread, we identify and verify intervention policies designed to contain the propagation of the infection over the network. The approach can be used to determine an effective policy in such a scenario.
Subjects/Keywords: Disease; Infection; Model Checking; Policies; Computer Sciences
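The ingredients named in the abstract, a propagation function and a quarantine function over a graph, compose in a few lines. The toy below simulates a radius-1 quarantine policy that reacts with a one-step detection lag; the graph and policy are invented, and the thesis verifies such policies with a model checker rather than by simulation.

```python
# Toy propagation: each step, infection spreads over non-quarantined edges.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}

def quarantine(infected, graph):
    """Hypothetical radius-1 policy: freeze all edges around every infected node."""
    edges = {(u, v) for u in infected for v in graph[u]}
    return edges | {(v, u) for (u, v) in edges}

def step(infected, graph, blocked):
    spread = {v for u in infected for v in graph[u] if (u, v) not in blocked}
    return infected | spread

infected, blocked = {0}, set()
for t in range(4):
    infected = step(infected, graph, blocked)   # infection moves first...
    blocked |= quarantine(infected, graph)      # ...then the policy reacts (one-step lag)
    print(t, sorted(infected))                  # contained at {0, 1, 2} after the lag
```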

University of Manchester
11.
Banks, Matthew.
On quantitative validation and optimisation of physically based deformable models in computer graphics.
Degree: PhD, 2020, University of Manchester
URL: https://www.research.manchester.ac.uk/portal/en/theses/on-quantitative-validation-and-optimisation-of-physically-based-deformable-models-in-computer-graphics(9cbfda85-dc11-4e5b-a5d6-08181e8c69dc).html ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.809460
Physically based deformable models (PBDMs) in computer graphics (CG) simulate how virtual solid bodies respond under external loads in computer applications that must be computationally efficient enough to be user-interactive. Balance laws from continuum mechanics theory govern the behaviour of solids. The modelling and simulation of this behaviour is traditionally performed using the finite element method (FEM), which solves to full accuracy and is therefore computationally expensive. This typically makes it unsuitable for IVEs, and so different, less expensive PBDMs are researched. These PBDMs make modelling assumptions that simplify the theory, and their numerical results are less accurate than FEM's. In this thesis we present a novel software framework that allows us to quantify the accuracy of any deformation history of any PBDM as a result of its modelling simplifications. In the majority of previous studies, validation is qualitative (through visual plausibility), which is necessarily subjective. We develop a novel, objective, quantitative validation of deformation histories procedure (QVDH) to quantify the accuracy of any PBDM with an error between 0 and 1. We test QVDH in 3D cantilever and cloth scenarios, both of which are popular scenarios in the CG literature. The framework is shown to yield a high accuracy score for a simplified model that can be analytically derived from the reference model, indicating that the framework is reliable. We then extend the framework to optimise the model properties of PBDMs (which determine the material response) by minimising the error measured by QVDH. Results are in good agreement with analytically derived results, showing the effectiveness of the procedure. Finally, we use the framework to explore adaptive PBDMs, in particular PBDMs that switch to other PBDMs at runtime, and demonstrate how switching can successfully be implemented to increase the accuracy of a deformation according to QVDH. The software framework is sufficiently general to be applicable to a wide variety of deformation scenarios. It has replaceable components that can help to improve the quality of the QVDH procedure. We believe that QVDH has many uses beyond those explored in this thesis.
Subjects/Keywords: Model Validation; Physics Based Animation; Computer Graphics

Clemson University
12.
Bhuiyan, Mohammad.
PERFORMANCE ANALYSIS AND FITNESS OF GPGPU AND MULTICORE ARCHITECTURES FOR SCIENTIFIC APPLICATIONS.
Degree: PhD, Computer Engineering, 2011, Clemson University
URL: https://tigerprints.clemson.edu/all_dissertations/827
Recent trends in computing architecture development have focused on exploiting task- and data-level parallelism from applications. Major hardware vendors are experimenting with novel parallel architectures, such as the Many Integrated Core (MIC) from Intel, which integrates 50 or more x86 processors on a single chip, and the Accelerated Processing Unit from AMD, which integrates a multicore x86 processor with a graphical processing unit (GPU); many other initiatives from other hardware vendors are underway. Therefore, various types of architectures are available to developers for accelerating an application. A performance model that predicts the suitability of an architecture for accelerating an application would be very helpful prior to implementation. Thus, in this research, a Fitness model that ranks the potential performance of accelerators for an application is proposed. The Fitness model is then extended, using statistical multiple regression, to model both the runtime performance of accelerators and the impact of programming models on accelerator performance with a high degree of accuracy. We have validated both performance models for all the case studies; the error rate of these models, calculated using the experimental performance data, is tolerable in the high-performance computing field. To develop and validate the two performance models, we also analyzed the performance of several multicore CPU and GPGPU architectures and the corresponding programming models using multiple case studies. The first case study is a matrix-matrix multiplication algorithm: by varying the size of the matrix from small to very large, the performance of the multicore and GPGPU architectures is studied. The second case study is a biological spiking neural network (SNN), implemented with four neuron models that have varying requirements for communication and computation, making them useful for performance analysis of the hardware platforms. We report and analyze the performance variation of four popular accelerators (Intel Xeon, AMD Opteron, Nvidia Fermi, and IBM PS3) and four advanced CPU architectures (Intel 32-core, AMD 32-core, IBM 16-core, and SUN 32-core) with problem size (matrix and network size) scaling, available optimization techniques, and execution configuration. This thorough analysis provides insight into how the performance of an accelerator is affected by problem size, optimization techniques, and accelerator configuration. We have analyzed the performance impact of four popular multicore parallel programming models (POSIX threading, Open Multi-Processing (OpenMP), Open Computing Language (OpenCL), and Concurrency Runtime) on an Intel i7 multicore architecture, and of two GPGPU programming models (Compute Unified Device Architecture (CUDA) and OpenCL) on an NVIDIA GPGPU. With the broad study conducted using a wide range of application complexity, multiple optimizations, and varying problem size,…
Advisors/Committee Members: Smith, Melissa C, Ligon, Walter B, Shen, Haying (Helen), Medlock, Jan, Oehsen, James.
Subjects/Keywords: Fitness; Performance Model; Regression; SNN; Computer Engineering
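A statistical multiple-regression performance model of the kind described reduces to fitting measured runtimes against architecture and application features, then ranking candidate accelerators by predicted runtime. A minimal numpy version with invented observations (the real model's features and data differ):

```python
import numpy as np

# Invented observations: (log2 problem size, cores, is_gpu) -> measured runtime (s).
X_raw = np.array([[10, 4, 0], [12, 4, 0], [12, 8, 0], [14, 8, 0],
                  [10, 0, 1], [12, 0, 1], [14, 0, 1], [14, 8, 0]])
y = np.array([1.8, 7.5, 4.1, 16.9, 0.9, 3.1, 12.0, 17.2])

X = np.column_stack([np.ones(len(X_raw)), X_raw])      # add intercept column
coef, *_ = np.linalg.lstsq(X, y, rcond=None)           # ordinary least squares

pred = X @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("coefficients:", np.round(coef, 2), "R^2:", round(r2, 3))

# Rank candidate accelerators for a new problem (Fitness-model style):
candidates = {"8-core CPU": [16, 8, 0], "GPU": [16, 0, 1]}
for name, feats in candidates.items():
    print(name, "predicted runtime:", round(float(np.r_[1, feats] @ coef), 2), "s")
```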

University of Maryland
13.
Filimonov, Denis.
Decision Tree-based Syntactic Language Modeling.
Degree: Computer Science, 2011, University of Maryland
URL: http://hdl.handle.net/1903/12215
Statistical language modeling is an integral part of many natural language processing applications, such as Automatic Speech Recognition (ASR) and Machine Translation. N-gram language models dominate the field, despite having an extremely shallow view of language: a Markov chain of words. In this thesis, we develop and evaluate a joint language model that incorporates syntactic and lexical information in an effort to "put language back into language modeling." Our main goal is to demonstrate that such a model is not only effective but can be made scalable and tractable. We utilize decision trees to tackle the problem of sparse parameter estimation, which is exacerbated by the use of syntactic information jointly with word context. While decision trees have been previously applied to language modeling, there has been little analysis of the factors affecting decision tree induction and probability estimation for language modeling. In this thesis, we analyze several aspects that affect decision tree-based language modeling, with an emphasis on syntactic language modeling. We then propose improvements to the decision tree induction algorithm based on our analysis, as well as methods for constructing forest models, i.e., models consisting of multiple decision trees. Finally, we evaluate the impact of our syntactic language model on large-scale Speech Recognition and Machine Translation tasks.

This thesis also addresses a number of engineering problems associated with the joint syntactic language model in order to make it tractable. In particular, we propose a novel decoding algorithm that exploits the decision tree structure to eliminate unnecessary computation. We also propose and evaluate an approximation of our syntactic model by word n-grams, an approximation that makes it possible to incorporate our model directly into the CDEC Machine Translation decoder rather than using the model only for rescoring hypotheses produced with an n-gram model.
Advisors/Committee Members: Harper, Mary P (advisor), Resnik, Philip (advisor).
Subjects/Keywords: Computer science; decision tree; syntactic language model
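The core move of decision-tree language modeling is to replace raw n-gram contexts with equivalence classes: each internal node asks a question about the context, and each leaf stores a smoothed next-word distribution. A tiny hand-built version follows; the question, tags, and data are invented for the sketch, and real induction would choose questions by likelihood gain.

```python
from collections import Counter

# Training events: (previous word, previous tag, next word). Tags stand in for
# the syntactic information a joint syntactic LM would condition on.
events = [("the", "DT", "dog"), ("the", "DT", "cat"), ("a", "DT", "dog"),
          ("dog", "NN", "barks"), ("cat", "NN", "sleeps"), ("dog", "NN", "sleeps")]
VOCAB = sorted({w for _, _, w in events})

# A hand-built one-node tree: the internal node asks a question about the context,
# and each leaf accumulates next-word counts.
def question(context):                 # "is the previous tag a determiner?"
    return context[1] == "DT"

leaves = {True: Counter(), False: Counter()}
for prev, tag, nxt in events:
    leaves[question((prev, tag))][nxt] += 1

def prob(next_word, context, alpha=1.0):
    """Leaf estimate with add-alpha smoothing over the vocabulary."""
    counts = leaves[question(context)]
    total = sum(counts.values())
    return (counts[next_word] + alpha) / (total + alpha * len(VOCAB))

print(round(prob("dog", ("the", "DT")), 3))    # high: determiners precede nouns here
print(round(prob("barks", ("the", "DT")), 3))  # low at this leaf
```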

University of Pennsylvania
14.
Ahn, Kook Jin.
Analyzing Massive Graphs in the Semi-streaming Model.
Degree: 2013, University of Pennsylvania
URL: https://repository.upenn.edu/edissertations/606
Massive graphs arise in many scenarios, for example, traffic data analysis in large networks, large-scale scientific experiments, and clustering of large data sets. The semi-streaming model was proposed for processing massive graphs. In the semi-streaming model, we have a randomly accessible memory which is near-linear in the number of vertices. The input graph (or equivalently, the edges in the graph) is presented as a sequential list of edges (insertion-only model) or of edge insertions and deletions (dynamic model). The list is read-only, but we may make multiple passes over it. There have been a few results in the insertion-only model, such as computing distance spanners and approximating the maximum matching.

In this thesis, we present algorithms and techniques for (i) solving more complex problems in the semi-streaming model (for example, problems in the dynamic model) and (ii) obtaining better solutions for problems that have already been studied (for example, the maximum matching problem). In the course of both of these, we develop new techniques with broad applications and explore the rich trade-offs between the complexity of models (insertion-only streams vs. dynamic streams), the number of passes, space, accuracy, and running time.

1. We initiate the study of dynamic graph streams. We start with basic problems such as the connectivity problem and computing the minimum spanning tree. These problems are trivial in the insertion-only model; however, they require non-trivial algorithms (and, for computing the exact minimum spanning tree, multiple passes) in the dynamic model.

2. We present a graph sparsification algorithm in the semi-streaming model. A graph sparsification is a sparse graph that approximately preserves all the cut values of a graph. Such a graph acts as an oracle for solving cut-related problems, for example, the minimum cut problem and the multicut problem. Our algorithm produces a graph sparsification with high probability in one pass.

3. We use primal-dual algorithms to develop semi-streaming algorithms. Primal-dual algorithms have been widely accepted as a framework for solving linear programs and semidefinite programs faster. In contrast, we apply the method to reduce space and the number of passes, in addition to reducing the running time. We also present some examples that arise in applications and show how to apply the techniques: the multicut problem, the correlation clustering problem, and the maximum matching problem. As a consequence, we also develop near-linear time algorithms for b-matching problems, which were not known before.
Subjects/Keywords: Algorithms; Graphs; Semi-streaming Model; Computer Sciences
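The first dynamic-stream problem mentioned, connectivity, really is trivial in the insertion-only model: one pass with a union-find structure in O(V) space, which is exactly the semi-streaming budget. A sketch of that baseline follows (the dynamic, deletion-tolerant case needs the sketching machinery the thesis develops).

```python
class UnionFind:
    """O(V)-space structure: exactly the semi-streaming memory budget."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def connected_components(n, edge_stream):
    """One pass over an insertion-only edge stream; edges are never stored."""
    uf = UnionFind(n)
    for u, v in edge_stream:       # read-only sequential access, single pass
        uf.union(u, v)
    return len({uf.find(x) for x in range(n)})

stream = iter([(0, 1), (2, 3), (1, 2), (5, 6)])   # e.g. arriving from disk/network
print(connected_components(8, stream))            # 4: {0,1,2,3}, {4}, {5,6}, {7}
```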

Utah State University
15.
Mahajan, Amitesh.
A Model for Bioaugmented Anaerobic Granule.
Degree: MS, Computer Science, 2018, Utah State University
URL: https://digitalcommons.usu.edu/etd/7045
► In this study, we have created a simulation model of the digestion of cellulose, a major component of microalgae, in a bioreactor.…
(more)
▼ In this study, we have created a simulation model of the digestion of cellulose, a major component of microalgae, in a bioreactor. The model is a computational model that simulates the process of granulation in anaerobic sludge and aims to investigate scenarios of possible granular bioaugmentation. Once a mature granule is formed, protein is used as an alternative substrate supplied to the mature granule. Protein, being a main component of cyanobacteria, will promote growth and incorporation of a cell type that can degrade protein (selective pressure). The model, developed in the cDynoMiCs simulation environment, successfully demonstrated the process of granule formation and bioaugmentation in an anaerobic granule. Bioaugmentation is a common strategy in the field of wastewater treatment, used to introduce a new metabolic capability to either aerobic or anaerobic granules. The end product of our work is a model that can visually demonstrate the varying stratifications of different trophic microbial groups, which will help engineers and researchers who operate laboratory- and industrial-scale anaerobic digesters and wish to enhance reactor performance. The working model we have developed has been validated against the existing literature and lab experiments. The model successfully demonstrates granulation in a cellobiose-fed system, with formation of a 0.63 mm mature granule in 59 days and production of a substantial amount of methane that could be used commercially as a green fuel. The model is extended to perform bioaugmentation by chaining different simulations.
Advisors/Committee Members: Nicholas Flann, Vladimir Kulyukin, Xiaojun Qi.
Subjects/Keywords: Model; Computational Biology; Simulation; Bioaugmentation; Computer Sciences
APA (6th Edition):
Mahajan, A. (2018). A Model for Bioaugmented Anaerobic Granule. (Masters Thesis). Utah State University. Retrieved from https://digitalcommons.usu.edu/etd/7045
Chicago Manual of Style (16th Edition):
Mahajan, Amitesh. “A Model for Bioaugmented Anaerobic Granule.” 2018. Masters Thesis, Utah State University. Accessed March 04, 2021.
https://digitalcommons.usu.edu/etd/7045.
MLA Handbook (7th Edition):
Mahajan, Amitesh. “A Model for Bioaugmented Anaerobic Granule.” 2018. Web. 04 Mar 2021.
Vancouver:
Mahajan A. A Model for Bioaugmented Anaerobic Granule. [Internet] [Masters thesis]. Utah State University; 2018. [cited 2021 Mar 04].
Available from: https://digitalcommons.usu.edu/etd/7045.
Council of Science Editors:
Mahajan A. A Model for Bioaugmented Anaerobic Granule. [Masters Thesis]. Utah State University; 2018. Available from: https://digitalcommons.usu.edu/etd/7045

University of California – Berkeley
16.
Bui, Dai Nguyen.
Scheduling and Optimizing Stream Programs on Multicore Machines by Exploiting High-Level Abstractions.
Degree: Electrical Engineering & Computer Sciences, 2013, University of California – Berkeley
URL: http://www.escholarship.org/uc/item/70x9m62f
► Real-time streaming of HD movies and TV via YouTube, Netflix, Apple TV and Xbox Live is gaining popularity. Stream programs often consume considerable amounts of…
(more)
▼ Real-time streaming of HD movies and TV via YouTube, Netflix, Apple TV and Xbox Live is gaining popularity. Stream programs often consume considerable amounts of energy due to their compute-intensive nature. Making stream programs energy-efficient is important, especially for energy-constrained computing devices such as mobile phones and tablets. The first part of this thesis focuses on exploiting the popular Synchronous Dataflow (SDF) high-level abstraction of stream programs to design adaptive stream programs for energy reduction on multicore machines. Observing that the IO rates of stream programs can vary at runtime, we make stream programs adaptive by transforming their internal structures so that the computing resources they occupy, e.g., cores and memory, track workload changes at runtime. Our experiments show that adapting stream programs to IO rate changes can lead to significant energy reduction. In addition, we show that the modularity and static attributes of the stream-program abstraction not only make it easier to map stream programs onto multicore machines but also enable energy-efficient routing schemes for high-bandwidth stream traffic on the interconnection fabric, such as networks-on-chip.
While SDF abstractions can help optimize stream programs on multicore machines, SDF is best suited to describing data-intensive stream computations such as FFT, DCT, and FIR. Modern stream operations such as MPEG2 or MP3 encoders/decoders are often more sophisticated and composed of multiple such computations. Enabling operation synchronization between different computations with different semantics leads to the need for control messaging. We extend previous work on control messaging and give a formal definition of control message latency via the semantics of information wavefronts. This control-operation-integrated SDF (COSDF) can model sophisticated stream programs more precisely. However, the conventional scheduling method developed for SDF is not sufficient to schedule COSDF applications. To schedule COSDF applications, we develop a scheduling method that builds dependency graphs and applies periodic graph theory, based on reduced dependency graphs (RDG). This RDG scheduling method also helps extract parallelism from stream programs. The more precise abstraction of COSDF is expected to help synthesize and generate sophisticated stream programs more efficiently.
Although the SDF modularity property also improves programmability, it can come at a price of efficiency when SDF models are interpreted rather than compiled in model-based design environments. However, compiling large SDF models to mitigate the inefficiency can be prohibitive in situations where even a small change to a model leads to a large recompilation overhead. We tackle this problem by proposing a method for incrementally compiling large SDF models that faithfully captures the executions of the original SDF models, avoiding the potential artificial deadlocks of a naive compilation method.
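To make the SDF abstraction concrete (a hedged sketch; the token rates and the brute-force solver are our own illustration, not the thesis' tooling): each SDF actor produces and consumes a fixed number of tokens per firing, and a steady-state schedule fires actors according to the repetition vector q solving the balance equations Gamma q = 0.

```python
import numpy as np
from itertools import product

def repetition_vector(gamma, max_firings=16):
    """Smallest positive integer q with gamma @ q == 0, found by brute
    force (fine for the tiny illustrative graphs used here)."""
    n = gamma.shape[1]
    best = None
    for q in product(range(1, max_firings + 1), repeat=n):
        if (gamma @ np.array(q) == 0).all():
            if best is None or sum(q) < sum(best):
                best = q
    return best

# Two-actor graph: on channel A->B, A produces 2 tokens/firing, B consumes 3.
gamma = np.array([[2, -3]])
print(repetition_vector(gamma))  # -> (3, 2): 3 firings of A feed 2 of B
```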
Subjects/Keywords: Computer science; Electrical engineering; Computer engineering; DSP; Energy; Language; Model-based; Programming Model; Streaming
APA (6th Edition):
Bui, D. N. (2013). Scheduling and Optimizing Stream Programs on Multicore Machines by Exploiting High-Level Abstractions. (Thesis). University of California – Berkeley. Retrieved from http://www.escholarship.org/uc/item/70x9m62f
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Bui, Dai Nguyen. “Scheduling and Optimizing Stream Programs on Multicore Machines by Exploiting High-Level Abstractions.” 2013. Thesis, University of California – Berkeley. Accessed March 04, 2021.
http://www.escholarship.org/uc/item/70x9m62f.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Bui, Dai Nguyen. “Scheduling and Optimizing Stream Programs on Multicore Machines by Exploiting High-Level Abstractions.” 2013. Web. 04 Mar 2021.
Vancouver:
Bui DN. Scheduling and Optimizing Stream Programs on Multicore Machines by Exploiting High-Level Abstractions. [Internet] [Thesis]. University of California – Berkeley; 2013. [cited 2021 Mar 04].
Available from: http://www.escholarship.org/uc/item/70x9m62f.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Bui DN. Scheduling and Optimizing Stream Programs on Multicore Machines by Exploiting High-Level Abstractions. [Thesis]. University of California – Berkeley; 2013. Available from: http://www.escholarship.org/uc/item/70x9m62f
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of the Western Cape
17.
Elmubarak, Mona.
Accuracy and reliability of traditional measurement techniques for tooth widths and arch perimeter compared to CAD/CAM.
Degree: 2018, University of the Western Cape
URL: http://hdl.handle.net/11394/6521
► Background: Plaster models form an integral part of the traditional orthodontic records. They are necessary for diagnosis and treatment planning, case presentations as well as…
(more)
▼ Background: Plaster models form an integral part of traditional orthodontic records. They are necessary for diagnosis and treatment planning, case presentations, and the evaluation of treatment progress. The accuracy of the measurements taken for space assessment is crucial prior to treatment planning. The introduction of digital models overcomes some problems experienced with plaster models, and digital models have been shown to be an acceptable alternative to plaster models.
Aim: The aim of the study was to determine the accuracy of traditional measurement techniques compared to CAD/CAM measurements in the assessment of tooth widths and arch perimeter from plaster models.
Method: The mesio-distal tooth widths and arch perimeter of thirty archived plaster models were measured using a digital caliper to the nearest 0.01 mm and a divider to the nearest 0.1 mm. Corresponding digital models were produced by scanning the plaster models with a CAD/CAM scanner (InEos X5), and a space analysis was completed using InEos Blue software. Measurements were repeated one week after the initial measurement. The methods were compared using descriptive analysis (mean difference and standard deviation).
Results: Operator reliability was high for the digital models as well as the plaster models when the measurement tool was the digital caliper (analyzed using the Pearson correlation coefficient and paired t-tests). The mean tooth-width measurements for CAD/CAM, digital caliper, and divider were 6.82 (±0.04), 6.94 (±0.04), and 7.11 (±0.04) mm, respectively. There were significant differences between the CAD/CAM and divider measurements, and between the digital caliper and divider measurements (p < 0.05). No significant difference was found when comparing CAD/CAM to the digital caliper. Positive correlations were found among CAD/CAM, digital caliper, and divider, with the digital caliper correlating most strongly with CAD/CAM; the difference between these two tools was not significant (p > 0.05). Arch perimeter measurements showed no statistically significant difference among CAD/CAM, digital caliper, and divider (p > 0.05).
Conclusion: Archived plaster models stored as records can be converted to digital models with the same measurement accuracy. A space analysis performed with the CAD/CAM system on digital models is as reliable as one performed with a caliper on plaster models.
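For readers who want to reproduce this style of analysis, a minimal sketch of a paired comparison (mean difference, standard deviation, and a paired t-test); the numbers below are hypothetical placeholders, not the study's data.

```python
# Hypothetical paired tooth-width measurements (mm) of the same models
# taken with two tools; placeholders, not the study's data.
import numpy as np
from scipy import stats

cadcam  = np.array([6.80, 6.85, 6.79, 6.84, 6.82])
caliper = np.array([6.93, 6.96, 6.92, 6.95, 6.94])

diff = caliper - cadcam
print(f"mean difference = {diff.mean():.3f} mm, SD = {diff.std(ddof=1):.3f} mm")

# Paired t-test: are the two tools' measurements systematically different?
t, p = stats.ttest_rel(caliper, cadcam)
print(f"t = {t:.2f}, p = {p:.4f}")
```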
Advisors/Committee Members: Hudson, Athol (advisor), Mulder, Riaan (advisor).
Subjects/Keywords: Paediatric dentistry; Plaster model; Digital model; Computer-aided design; Computer-aided modeling
APA (6th Edition):
Elmubarak, M. (2018). Accuracy and reliability of traditional measurement techniques for tooth widths and arch perimeter compared to CAD/CAM
. (Thesis). University of the Western Cape. Retrieved from http://hdl.handle.net/11394/6521
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Elmubarak, Mona. “Accuracy and reliability of traditional measurement techniques for tooth widths and arch perimeter compared to CAD/CAM
.” 2018. Thesis, University of the Western Cape. Accessed March 04, 2021.
http://hdl.handle.net/11394/6521.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Elmubarak, Mona. “Accuracy and reliability of traditional measurement techniques for tooth widths and arch perimeter compared to CAD/CAM
.” 2018. Web. 04 Mar 2021.
Vancouver:
Elmubarak M. Accuracy and reliability of traditional measurement techniques for tooth widths and arch perimeter compared to CAD/CAM
. [Internet] [Thesis]. University of the Western Cape; 2018. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/11394/6521.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Elmubarak M. Accuracy and reliability of traditional measurement techniques for tooth widths and arch perimeter compared to CAD/CAM
. [Thesis]. University of the Western Cape; 2018. Available from: http://hdl.handle.net/11394/6521
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of California – Irvine
18.
Ping, Wei.
Learning and Inference in Latent Variable Graphical Models.
Degree: Computer Science, 2016, University of California – Irvine
URL: http://www.escholarship.org/uc/item/7q90z4b5
► Probabilistic graphical models such as Markov random fields provide a powerful framework and tools for machine learning, especially for structured output learning. Latent variables naturally…
(more)
▼ Probabilistic graphical models such as Markov random fields provide a powerful framework and tools for machine learning, especially for structured output learning. Latent variables naturally exist in many applications of these models; they may arise from partially labeled data, or be introduced to enrich model flexibility. However, the presence of latent variables presents challenges for learning and inference. For example, the standard approach of using maximum a posteriori (MAP) prediction is complicated by the fact that, in latent variable models (LVMs), we typically want to first marginalize out the latent variables, leading to an inference task called marginal MAP. Unfortunately, marginal MAP prediction can be NP-hard even on relatively simple models such as trees, and few methods have been developed in the literature. This thesis presents a class of variational bounds for marginal MAP that generalizes the popular dual-decomposition method for MAP inference, and enables an efficient block coordinate descent algorithm to solve the corresponding optimization. Similarly, when learning LVMs for structured prediction, it is critically important to maintain the effect of uncertainty over latent variables by marginalization. We propose the marginal structured SVM, which uses marginal MAP inference to properly handle that uncertainty inside the framework of max-margin learning.
We then turn our attention to an important subclass of latent variable models, restricted Boltzmann machines (RBMs). RBMs are two-layer latent variable models that are widely used to capture complex distributions of observed data, including as a building block for deep probabilistic models. One practical problem in RBMs is model selection: we need to determine the hidden (latent) layer size before performing learning. We propose an infinite RBM model and apply the Frank-Wolfe algorithm to solve the resulting learning problem. The resulting algorithm can be interpreted as inserting a hidden variable into an RBM model at each iteration, to easily and efficiently perform model selection during learning. We also study the role of approximate inference in RBMs and conditional RBMs. In particular, there is a common assumption that belief propagation methods do not work well on RBM-based models, especially for learning. In contrast, we demonstrate that for conditional models and structured prediction, learning RBM-based models with belief propagation and its variants can provide much better results than the state-of-the-art contrastive divergence methods.
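For orientation, the marginal MAP task discussed above can be written as follows (notation assumed here: output variables x, latent variables z, evidence y):

```latex
% Joint MAP maximizes over everything; marginal MAP sums out the latents first.
\hat{x}_{\mathrm{MAP}}  = \arg\max_{x,\,z}\; p(x, z \mid y)
\qquad\text{vs.}\qquad
\hat{x}_{\mathrm{MMAP}} = \arg\max_{x}\; \sum_{z} p(x, z \mid y)
```

The inner sum over z is what makes marginal MAP harder than pure maximization; it is this coupling that the variational bounds in the thesis relax.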
Subjects/Keywords: Computer science; Dual-decomposition; Graphical Model; Latent Variable Model; Structured SVM
APA (6th Edition):
Ping, W. (2016). Learning and Inference in Latent Variable Graphical Models. (Thesis). University of California – Irvine. Retrieved from http://www.escholarship.org/uc/item/7q90z4b5
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Ping, Wei. “Learning and Inference in Latent Variable Graphical Models.” 2016. Thesis, University of California – Irvine. Accessed March 04, 2021.
http://www.escholarship.org/uc/item/7q90z4b5.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Ping, Wei. “Learning and Inference in Latent Variable Graphical Models.” 2016. Web. 04 Mar 2021.
Vancouver:
Ping W. Learning and Inference in Latent Variable Graphical Models. [Internet] [Thesis]. University of California – Irvine; 2016. [cited 2021 Mar 04].
Available from: http://www.escholarship.org/uc/item/7q90z4b5.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Ping W. Learning and Inference in Latent Variable Graphical Models. [Thesis]. University of California – Irvine; 2016. Available from: http://www.escholarship.org/uc/item/7q90z4b5
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of California – Berkeley
19.
Sturton, Cynthia.
Secure Virtualization with Formal Methods.
Degree: Computer Science, 2013, University of California – Berkeley
URL: http://www.escholarship.org/uc/item/59q2n05r
► Virtualization software is increasingly a part of the infrastructure behind our online activities. Companies and institutions that produce online content are taking advantage of the…
(more)
▼ Virtualization software is increasingly a part of the infrastructure behind our online activities. Companies and institutions that produce online content are taking advantage of the "infrastructure as a service" cloud computing model to obtain cheap and reliable computing power. Cloud providers are able to provide this service by letting multiple client operating systems share a single physical machine, and they use virtualization technology to do that. The virtualization layer also provides isolation between guests, protecting each from unwanted access by the co-tenants. Beyond cloud computing, virtualization software has a variety of security-critical applications, including intrusion detection systems, malware analysis, and providing a secure execution environment in end-users' personal machines.
In this work, we investigate the verification of isolation properties for virtualization software. Large data structures, such as page tables and caches, are often used to keep track of emulated state and are central to providing correct isolation. We identify these large data structures as one of the biggest challenges in applying traditional formal methods to the verification of isolation properties in virtualization software.
We present a new semi-automatic procedure, S2W, to tackle this challenge. Our approach uses a combination of abstraction and bounded model checking and allows for the verification of safety properties of large or unbounded arrays. The key new ideas are a set of heuristics for creating an abstract model and computing a bound on the reachability diameter of its state space. We evaluate this methodology using six case studies, including verification of the address translation logic in the Bochs x86 emulator, and verification of security properties of several hypervisor models. In all of our case studies, we show that our heuristics are effective: we are able to prove the safety property of interest in a reasonable amount of time (the longest verification takes 70 minutes to complete), and our abstraction-based model checking returns no spurious counter-examples.
One weakness of using model checking is that the verification result is only as good as the model; if the model does not accurately represent the system under consideration, properties proven true of the model may or may not be true of the system. We present a theoretical framework for describing how to validate a model against the corresponding source code, and an implementation of the framework using symbolic execution and satisfiability modulo theories (SMT) solving. We evaluate our procedure on a number of case studies, including the Bochs address translation logic, a component of the Berkeley Packet Filter, the TCAS suite, the FTP server from GNU Inetutils, and a component of the XMHF hypervisor. Our results show that even for small, well understood code bases, a hand-written model is likely to have errors. For example, in the model for the Bochs address translation logic - a small model of only 300 lines of code that was vigorously used…
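As a toy illustration of bounded model checking over a large array-like data structure (a hedged sketch using the z3 SMT solver; the page-table system and its guard are invented here and are not the thesis' S2W procedure):

```python
from z3 import (Array, BitVec, BitVecSort, BitVecVal, K, Or, Select, Solver,
                Store, sat)

BOUND = 4                                # unrolling depth
PAGE = FRAME = BitVecSort(8)
FORBIDDEN = BitVecVal(0xFF, 8)           # host frame guests must never map

# One page-table array per time step; initially every page maps to frame 0.
pt = [Array(f"pt_{t}", PAGE, FRAME) for t in range(BOUND + 1)]
s = Solver()
s.add(pt[0] == K(PAGE, BitVecVal(0, 8)))

for t in range(BOUND):
    page, frame = BitVec(f"page_{t}", 8), BitVec(f"frame_{t}", 8)
    s.add(frame != FORBIDDEN)            # the (invented) hypervisor guard
    s.add(pt[t + 1] == Store(pt[t], page, frame))

# Violation query: some page maps to the forbidden frame at some step.
qp = BitVec("qp", 8)
s.add(Or([Select(pt[t], qp) == FORBIDDEN for t in range(BOUND + 1)]))
print("counterexample found" if s.check() == sat else f"safe up to bound {BOUND}")
```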
Subjects/Keywords: Computer science; Emulators; Hypervisors; Model checking; Model validation; Security
APA (6th Edition):
Sturton, C. (2013). Secure Virtualization with Formal Methods. (Thesis). University of California – Berkeley. Retrieved from http://www.escholarship.org/uc/item/59q2n05r
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Sturton, Cynthia. “Secure Virtualization with Formal Methods.” 2013. Thesis, University of California – Berkeley. Accessed March 04, 2021.
http://www.escholarship.org/uc/item/59q2n05r.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Sturton, Cynthia. “Secure Virtualization with Formal Methods.” 2013. Web. 04 Mar 2021.
Vancouver:
Sturton C. Secure Virtualization with Formal Methods. [Internet] [Thesis]. University of California – Berkeley; 2013. [cited 2021 Mar 04].
Available from: http://www.escholarship.org/uc/item/59q2n05r.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Sturton C. Secure Virtualization with Formal Methods. [Thesis]. University of California – Berkeley; 2013. Available from: http://www.escholarship.org/uc/item/59q2n05r
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

UCLA
20.
Zhu, Yaxuan.
A 2D example study of Deep FRAME model.
Degree: Computer Science, 2019, UCLA
URL: http://www.escholarship.org/uc/item/4rw3n37w
► In this work, we use a 2D example to study many aspects of the Deep FRAME model. We first visualize the training process, showing how…
(more)
▼ In this work, we use a 2D example to study many aspects of the Deep FRAME model. We first visualize the training process, showing how the fitted probability distribution evolves during training, how the model captures different modes, and how these results are influenced by the choice of prior distribution and activation function. We then study the activation pattern of the learned network and the corresponding partition of the input space. Based on the input-space partition and statistics matching, we compare the multi-layer model trained with SGD with the original one-layer model. Finally, we analyze the case in which we train the model with finite-step MCMC sampling, showing the difference between fitting the energy function and synthesizing samples.
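Since sampling from the learned energy is central to training FRAME-style models, here is a minimal Langevin-dynamics sketch on a toy 2D double-well energy (our own example, not the Deep FRAME network):

```python
import numpy as np

def energy_grad(x):
    """Gradient of the toy double-well energy E(x) = (x0^2 - 1)^2 + x1^2."""
    return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])

def langevin(steps=1000, eps=0.1, rng=np.random.default_rng(0)):
    x = rng.normal(size=2)
    for _ in range(steps):
        # x <- x - (eps^2 / 2) * grad E(x) + eps * N(0, I)
        x = x - 0.5 * eps**2 * energy_grad(x) + eps * rng.normal(size=2)
    return x

samples = np.array([langevin() for _ in range(200)])
# Samples should concentrate near the two modes at (+1, 0) and (-1, 0).
print(np.round(np.abs(samples[:, 0]).mean(), 2), np.round(samples[:, 1].mean(), 2))
```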
Subjects/Keywords: Computer science; Statistics; Deep Frame Model; Energy-based Model
APA (6th Edition):
Zhu, Y. (2019). A 2D example study of Deep FRAME model. (Thesis). UCLA. Retrieved from http://www.escholarship.org/uc/item/4rw3n37w
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Zhu, Yaxuan. “A 2D example study of Deep FRAME model.” 2019. Thesis, UCLA. Accessed March 04, 2021.
http://www.escholarship.org/uc/item/4rw3n37w.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Zhu, Yaxuan. “A 2D example study of Deep FRAME model.” 2019. Web. 04 Mar 2021.
Vancouver:
Zhu Y. A 2D example study of Deep FRAME model. [Internet] [Thesis]. UCLA; 2019. [cited 2021 Mar 04].
Available from: http://www.escholarship.org/uc/item/4rw3n37w.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Zhu Y. A 2D example study of Deep FRAME model. [Thesis]. UCLA; 2019. Available from: http://www.escholarship.org/uc/item/4rw3n37w
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

UCLA
21.
Han, Tian.
Unsupervised Learning and Understanding of Deep Generative Models.
Degree: Statistics, 2019, UCLA
URL: http://www.escholarship.org/uc/item/1tx2496r
► Probabilistic generative models, especially ones that are parametrized by convolutional neural network (ConvNet), are compact representation tools towards knowledge understanding and can be crucial in…
(more)
▼ Probabilistic generative models, especially ones parametrized by convolutional neural networks (ConvNets), are compact representation tools for knowledge understanding and can be crucial in statistics as well as artificial intelligence. The generator model and the energy-based model are two notable examples. Yet learning and understanding such models can be challenging because of the high dimensionality of the input and the high non-linearity of the network. In this dissertation, we pay particular attention to the generator model, and study its learning algorithm and the behavior of the learned model. We also develop a joint learning scheme for the generator model and the energy-based model.
To learn the generator model, we view it through the lens of a non-linear generalization of factor analysis and propose an alternating back-propagation algorithm for learning. The algorithm iterates the following two steps: (1) inferential back-propagation, which infers the latent factors by Langevin dynamics or gradient descent; (2) learning back-propagation, which updates the parameters given the inferred latent factors by gradient descent. The gradient computations in both steps are powered by back-propagation, and they share most of their code in common. We show that the alternating back-propagation algorithm can learn realistic generator models of natural images, video sequences, and sounds. Moreover, it can also be used to learn from incomplete or indirect training data.
The generator model can be naturally extended to multi-view representation learning, where we build a separate generator model for each domain but share their latent variables. The proposed multi-view generator model can be easily learned through alternating back-propagation. Our experiments show that the proposed method is powerful in generation, prediction, and recognition. Specifically, we demonstrate that our model can accurately rotate and complete faces as well as predict missing modalities. We also show that our model can achieve state-of-the-art or competitive recognition performance in quantitative comparisons.
Further, the generator model can be jointly learned with the energy-based model. We propose a probabilistic framework, called the divergence triangle, as a compact and symmetric (anti-symmetric) objective function that seamlessly integrates variational learning, adversarial learning, the wake-sleep algorithm, and contrastive divergence. This unification makes sampling, inference, and energy evaluation readily available without the need for costly Markov chain Monte Carlo methods. Our experiments demonstrate that the divergence triangle is capable of learning (1) an energy-based model with a well-formed energy landscape, (2) direct sampling in the form of a generator model, and (3) feed-forward inference that faithfully reconstructs observed as well as synthesized data. The divergence triangle is also a robust training method that can learn from incomplete data. Last but not least, we take the…
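A minimal sketch of the alternating back-propagation loop described above, in its simplest linear-generator (i.e., factor-analysis) instance; the dimensions, step-size rules, and noise level are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n, sigma2 = 8, 2, 200, 0.1          # data dim, latent dim, samples, noise var
W_true = rng.normal(size=(d, k))
X = W_true @ rng.normal(size=(k, n)) + np.sqrt(sigma2) * rng.normal(size=(d, n))

W = rng.normal(size=(d, k)) * 0.1         # generator parameters (theta)
Z = np.zeros((k, n))                      # latent factors
for it in range(200):
    # (1) Inferential step: gradient descent on z for
    #     L = ||x - Wz||^2 / (2*sigma2) + ||z||^2 / 2, with a 1/L step size.
    Lz = np.linalg.norm(W, 2) ** 2 / sigma2 + 1.0
    for _ in range(10):
        Z -= (1.0 / Lz) * (W.T @ (W @ Z - X) / sigma2 + Z)
    # (2) Learning step: gradient descent on W given the inferred Z.
    Lw = np.linalg.norm(Z @ Z.T, 2) / sigma2 + 1e-8
    W -= (1.0 / Lw) * (W @ Z - X) @ Z.T / sigma2

print("reconstruction RMSE:", np.sqrt(np.mean((W @ Z - X) ** 2)))
```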
Subjects/Keywords: Statistics; computer vision; energy based model; generative model; sampling; unsupervised learning
APA (6th Edition):
Han, T. (2019). Unsupervised Learning and Understanding of Deep Generative Models. (Thesis). UCLA. Retrieved from http://www.escholarship.org/uc/item/1tx2496r
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Han, Tian. “Unsupervised Learning and Understanding of Deep Generative Models.” 2019. Thesis, UCLA. Accessed March 04, 2021.
http://www.escholarship.org/uc/item/1tx2496r.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Han, Tian. “Unsupervised Learning and Understanding of Deep Generative Models.” 2019. Web. 04 Mar 2021.
Vancouver:
Han T. Unsupervised Learning and Understanding of Deep Generative Models. [Internet] [Thesis]. UCLA; 2019. [cited 2021 Mar 04].
Available from: http://www.escholarship.org/uc/item/1tx2496r.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Han T. Unsupervised Learning and Understanding of Deep Generative Models. [Thesis]. UCLA; 2019. Available from: http://www.escholarship.org/uc/item/1tx2496r
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Brunel University
22.
De Folter, Jozefus Johannes Martinus.
Advanced modelling and visualisation of liquid-liquid separations of complex sample components, with variable phase distribution and mode of operation.
Degree: PhD, 2013, Brunel University
URL: http://bura.brunel.ac.uk/handle/2438/7157
;
http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.564033
► This research is about liquid-liquid chromatography modelling. While the main focus was on liquid-liquid chromatography, where the stationary and mobile phases are both liquid, theory…
(more)
▼ This research is about liquid-liquid chromatography modelling. While the main focus was on liquid-liquid chromatography, where the stationary and mobile phases are both liquid, the theory of other types of chromatography, including the most widely used techniques, was considered as well. The main goal of this research was to develop a versatile liquid-liquid separation model able to handle all potential operating scenarios and modes of operation. A second goal was to create effective and usable interfaces to such a model, implying primarily information visualisation and secondarily educative visualisation.
The first model developed was based on Counter-Current Distribution. Next, a new, more elemental model was developed, the probabilistic model, which better captures continuous liquid-liquid chromatography techniques. Finally, a more traditional model was developed using transport theory. These models were applied to, and compared against, experimental data taken from the literature. The models were demonstrated to cover all main liquid-liquid chromatography techniques, incorporated the different modes of operation, and were able to accurately model many sample components and complex sample injections.
A model interface was developed, permitting functional and effective model configuration, exploration and analysis using visualisation and interactivity. Different versions of the interface were then evaluated using questionnaires, group interviews and Insight Evaluation. The visualisation and interactivity enhancements proved to contribute to understanding of, and insight into, the underlying chromatography process. This also demonstrated the value of the Insight Evaluation method, which provided the qualitative evaluation results desired for this interface evaluation. A prototype of a new graphical user interface was developed and showed great potential for combining model parameter input with exploration of the liquid-liquid chromatography process. Additionally, a new visualisation method was developed that can accurately visualise the different modes of operation; this was used to create animations, which were also evaluated. The results of this evaluation show that the new visualisation helps CCC novices understand the liquid-liquid chromatography process.
The model software will be a valuable tool for industry for predicting, evaluating and validating experimental separations and production processes. While effective models already existed, the use of interactive visualisation permits users to explore the relationship between parameters and performance in a simpler yet more powerful way. It will also be a valuable tool for academia for teaching and training, of both staff and students, in how to use the technology. Prior to this work, no such tool existed, or existing tools were limited in their accessibility and educational value.
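For background on the first model type mentioned above, the textbook Craig counter-current distribution has a closed form worth recalling (a hedged sketch of the classical model, not the author's implementation): after N transfers, a solute with partition coefficient K spreads across the cells binomially, so solutes with different K values migrate at different rates.

```python
from math import comb

def craig_distribution(n_transfers, K, vol_ratio=1.0):
    """Fraction of solute in each cell after n transfers (classic Craig CCD).
    p is the fraction moving with the mobile phase at each transfer."""
    p = K * vol_ratio / (1.0 + K * vol_ratio)
    return [comb(n_transfers, r) * p**r * (1 - p) ** (n_transfers - r)
            for r in range(n_transfers + 1)]

# Two solutes with different partition coefficients separate along the train:
for K in (0.5, 2.0):
    dist = craig_distribution(20, K)
    print(f"K={K}: peak in cell {dist.index(max(dist))}")
```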
Subjects/Keywords: 660.6; Computer model; Simulation; Probabilistic model; Visualisation & interactivity; ProMISE
APA (6th Edition):
De Folter, J. J. M. (2013). Advanced modelling and visualisation of liquid-liquid separations of complex sample components, with variable phase distribution and mode of operation. (Doctoral Dissertation). Brunel University. Retrieved from http://bura.brunel.ac.uk/handle/2438/7157 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.564033
Chicago Manual of Style (16th Edition):
De Folter, Jozefus Johannes Martinus. “Advanced modelling and visualisation of liquid-liquid separations of complex sample components, with variable phase distribution and mode of operation.” 2013. Doctoral Dissertation, Brunel University. Accessed March 04, 2021.
http://bura.brunel.ac.uk/handle/2438/7157 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.564033.
MLA Handbook (7th Edition):
De Folter, Jozefus Johannes Martinus. “Advanced modelling and visualisation of liquid-liquid separations of complex sample components, with variable phase distribution and mode of operation.” 2013. Web. 04 Mar 2021.
Vancouver:
De Folter JJM. Advanced modelling and visualisation of liquid-liquid separations of complex sample components, with variable phase distribution and mode of operation. [Internet] [Doctoral dissertation]. Brunel University; 2013. [cited 2021 Mar 04].
Available from: http://bura.brunel.ac.uk/handle/2438/7157 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.564033.
Council of Science Editors:
De Folter JJM. Advanced modelling and visualisation of liquid-liquid separations of complex sample components, with variable phase distribution and mode of operation. [Doctoral Dissertation]. Brunel University; 2013. Available from: http://bura.brunel.ac.uk/handle/2438/7157 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.564033

Arizona State University
23.
Singla, Gaurav Rattan.
Predictive Dynamic Thermal and Power Management for
Heterogeneous Mobile Platforms.
Degree: Electrical Engineering, 2015, Arizona State University
URL: http://repository.asu.edu/items/29665
► Heterogeneous multiprocessor systems-on-chip (MPSoCs) powering mobile platforms integrate multiple asymmetric CPU cores, a GPU, and many specialized processors. When the MPSoC operates close to its…
(more)
▼ Heterogeneous multiprocessor systems-on-chip (MPSoCs) powering mobile platforms integrate multiple asymmetric CPU cores, a GPU, and many specialized processors. When the MPSoC operates close to its peak performance, power dissipation easily raises the temperature and hence adversely impacts reliability. Since a fan is not a viable solution for hand-held devices, there is a strong need for dynamic thermal and power management (DTPM) algorithms that can regulate temperature with minimal performance impact. This work presents a DTPM algorithm based on a practical temperature-prediction methodology using system identification. The DTPM algorithm dynamically computes a power budget using the predicted temperature, and controls the types and number of active processors as well as their frequencies. Experiments on an octa-core big.LITTLE processor with common Android apps demonstrate that the proposed technique predicts temperature within 3% accuracy, while the DTPM algorithm provides around a 6x reduction in temperature variance and up to a 16% reduction in total platform power compared to using a fan.
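A minimal sketch of the system-identification flavor described above (toy data and made-up coefficients, not the thesis' model): fit a first-order model T[k+1] ≈ a·T[k] + b·P[k] + c by least squares, then invert it for a power budget that keeps the predicted temperature under a limit.

```python
import numpy as np

rng = np.random.default_rng(1)
# Made-up "measured" traces: power P (W) and temperature T (deg C).
P = 2.0 + rng.uniform(0, 3, size=200)
T = np.empty(201); T[0] = 35.0
for k in range(200):                       # toy ground truth: a=0.95, b=0.8, c=1.5
    T[k + 1] = 0.95 * T[k] + 0.8 * P[k] + 1.5 + rng.normal(0, 0.05)

# System identification: least-squares fit of T[k+1] = a*T[k] + b*P[k] + c.
A = np.column_stack([T[:-1], P, np.ones(200)])
(a, b, c), *_ = np.linalg.lstsq(A, T[1:], rcond=None)

# DTPM-style power budget: largest P keeping the predicted T[k+1] <= T_limit.
T_limit, T_now = 70.0, 65.0
print(f"a={a:.3f} b={b:.3f} c={c:.3f}, budget={(T_limit - a*T_now - c)/b:.2f} W")
```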
Subjects/Keywords: Electrical engineering; Computer engineering; Heterogeneous; Mobile platform; Power model; Thermal model
APA (6th Edition):
Singla, G. R. (2015). Predictive Dynamic Thermal and Power Management for
Heterogeneous Mobile Platforms. (Masters Thesis). Arizona State University. Retrieved from http://repository.asu.edu/items/29665
Chicago Manual of Style (16th Edition):
Singla, Gaurav Rattan. “Predictive Dynamic Thermal and Power Management for
Heterogeneous Mobile Platforms.” 2015. Masters Thesis, Arizona State University. Accessed March 04, 2021.
http://repository.asu.edu/items/29665.
MLA Handbook (7th Edition):
Singla, Gaurav Rattan. “Predictive Dynamic Thermal and Power Management for
Heterogeneous Mobile Platforms.” 2015. Web. 04 Mar 2021.
Vancouver:
Singla GR. Predictive Dynamic Thermal and Power Management for
Heterogeneous Mobile Platforms. [Internet] [Masters thesis]. Arizona State University; 2015. [cited 2021 Mar 04].
Available from: http://repository.asu.edu/items/29665.
Council of Science Editors:
Singla GR. Predictive Dynamic Thermal and Power Management for
Heterogeneous Mobile Platforms. [Masters Thesis]. Arizona State University; 2015. Available from: http://repository.asu.edu/items/29665

University of Waterloo
24.
Yim, Daniel.
CLBlood: A Cell-Based Light Interaction Model for Human Blood.
Degree: 2012, University of Waterloo
URL: http://hdl.handle.net/10012/6475
► The development of predictive appearance models for organic tissues is a challenging task due to the inherent complexity of these materials. In this thesis, we…
(more)
▼ The development of predictive appearance models for organic tissues is a challenging task due to the inherent complexity of these materials. In this thesis, we closely examine the biophysical processes responsible for the appearance attributes of whole blood, one of the most fundamental of these materials. We describe a new appearance model that simulates the mechanisms of light propagation and absorption within the cellular and fluid portions of this specialized tissue. The proposed model employs a comprehensive yet flexible first-principles approach based on the morphological, optical and biochemical properties of blood cells. This approach allows environment-driven changes in the cells' anatomy and orientation to be appropriately included in the light transport simulations. The correctness and predictive capabilities of the proposed model are quantitatively and qualitatively evaluated through comparisons of modeled results with actual measured data and experimental observations reported in the scientific literature. Its incorporation into rendering systems is illustrated through images of blood samples depicting appearance variations controlled by physiologically meaningful parameters. Besides its contributions to the modeling of material appearance, the research presented in this thesis is also expected to have applications in a wide range of biomedical areas, from optical diagnostics to the visualization and noninvasive imaging of blood-perfused tissues.
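To give a flavor of the underlying light-transport simulation (a deliberately oversimplified sketch, not the CLBlood model): photons take exponentially distributed free paths with attenuation coefficient mu_t = mu_a + mu_s, and at each interaction are absorbed with probability mu_a/mu_t; scattering is crudely treated here as continuing forward.

```python
import numpy as np

def simulate(n_photons=20_000, mu_a=0.3, mu_s=2.7, depth=1.0,
             rng=np.random.default_rng(2)):
    mu_t = mu_a + mu_s
    absorbed = transmitted = 0
    for _ in range(n_photons):
        x = 0.0
        while True:
            x += -np.log(rng.random()) / mu_t     # exponential free path
            if x >= depth:
                transmitted += 1
                break
            if rng.random() < mu_a / mu_t:        # absorb vs. scatter
                absorbed += 1
                break
            # else: scattered; this toy keeps the photon moving forward
    return absorbed / n_photons, transmitted / n_photons

print(simulate())  # (fraction absorbed, fraction transmitted)
```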
Subjects/Keywords: Computer Graphics; Appearance Model; Predictive Model; Blood; Monte Carlo; Light Transport
APA (6th Edition):
Yim, D. (2012). CLBlood: A Cell-Based Light Interaction Model for Human Blood. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/6475
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Yim, Daniel. “CLBlood: A Cell-Based Light Interaction Model for Human Blood.” 2012. Thesis, University of Waterloo. Accessed March 04, 2021.
http://hdl.handle.net/10012/6475.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Yim, Daniel. “CLBlood: A Cell-Based Light Interaction Model for Human Blood.” 2012. Web. 04 Mar 2021.
Vancouver:
Yim D. CLBlood: A Cell-Based Light Interaction Model for Human Blood. [Internet] [Thesis]. University of Waterloo; 2012. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/10012/6475.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Yim D. CLBlood: A Cell-Based Light Interaction Model for Human Blood. [Thesis]. University of Waterloo; 2012. Available from: http://hdl.handle.net/10012/6475
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Univerzitet u Beogradu
25.
Jovanović, Mlađan, 1981-.
Razvoj kontekstno-osetljivih korisničkih
interfejsa.
Degree: Elektrotehnički fakultet, 2013, Univerzitet u Beogradu
URL: https://fedorabg.bg.ac.rs/fedora/get/o:5474/bdef:Content/get
► Defense date: 20 March 2013. A well-designed, intuitive and attractive user interface is a key success factor for computing products and systems. To improve the development and usability…
(more)
▼ Defense date: 20 March 2013.
A well-designed, intuitive and attractive user interface is a key success factor for computing products and systems. To improve the development and usability of user interfaces, the characteristics of the user must be taken into account. This requires an interdisciplinary approach and knowledge from different fields such as the computing, cognitive and biological sciences. In addition, the characteristics of the medium and of the physical environment in which human-computer interaction takes place must be considered. User interface development should also respect the characteristics of the hardware devices used to communicate with the user, the available software resources, and the characteristics of the software systems that will use the interface. Accordingly, the notion of a context-sensitive interface is introduced, defined as a user interface that adapts to the context of interaction with a particular user. The interaction context consists of three classes of entities: the user of the computing system (the human); the hardware and software platform through which users interact with the system; and the physical environment in which the interaction with the system takes place. Looking at the evolution of software development, we observe an increase in the level of abstraction at which software is described. The current state of the art allows a platform-independent specification of software that is gradually or automatically translated into executable applications for different software and hardware platforms. Model-driven architecture, which is used for developing complex software solutions, organizes concepts and models hierarchically across several levels of abstraction. This is particularly important given that the development of context-sensitive user interfaces is a complex process involving the modeling of a large number of elements at different levels of abstraction. In this thesis we investigated the problem of improving the development of context-sensitive user interfaces. We propose a solution that automates the development of user interfaces adapted to the context of human-computer interaction. The solution consists of extending a modeling language, a standard software development process (the Unified Process), and development tools with elements specific to human-computer interaction. Accordingly, a model of context-sensitive human-computer interaction was developed, and user interface models at different levels of abstraction were proposed. Because of their standardization, wide adoption, and the availability of development tools, we chose to extend the UML (Unified Modeling Language) modeling language and the ATL (Atlas Transformation Language) model transformation language. The application of the proposed approach is demonstrated in two case studies from different domains...
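As a toy analogue of the proposed model-driven approach (the thesis extends UML and ATL; this Python sketch merely mimics a model-to-model transformation that maps an abstract, platform-independent UI model to concrete widgets according to the interaction context; all names are invented):

```python
# Abstract UI model: platform-independent interaction elements (invented).
abstract_ui = [
    {"element": "input", "name": "destination"},
    {"element": "trigger", "name": "search"},
]

# Transformation rules: (element kind, interaction context) -> concrete widget.
RULES = {
    ("input", "desktop"): "TextBox",
    ("input", "voice"): "SpeechPrompt",
    ("trigger", "desktop"): "Button",
    ("trigger", "voice"): "VoiceCommand",
}

def to_concrete(model, context):
    """Map each abstract element to a concrete widget for the given context."""
    return [{"widget": RULES[(e["element"], context)], "name": e["name"]}
            for e in model]

for ctx in ("desktop", "voice"):
    print(ctx, "->", to_concrete(abstract_ui, ctx))
```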
Advisors/Committee Members: Jovanović, Zoran, 1953-.
Subjects/Keywords: human-computer interaction; context-sensitive user interface; model-driven engineering; model-driven architecture; human model; device model; environment model; multimodal interaction; user interface model; model transformations
APA (6th Edition):
Jovanović, Mlađan, 1. (2013). Razvoj kontekstno-osetljivih korisničkih
interfejsa. (Thesis). Univerzitet u Beogradu. Retrieved from https://fedorabg.bg.ac.rs/fedora/get/o:5474/bdef:Content/get
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Jovanović, Mlađan, 1981-. “Razvoj kontekstno-osetljivih korisničkih
interfejsa.” 2013. Thesis, Univerzitet u Beogradu. Accessed March 04, 2021.
https://fedorabg.bg.ac.rs/fedora/get/o:5474/bdef:Content/get.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Jovanović, Mlađan, 1981-. “Razvoj kontekstno-osetljivih korisničkih
interfejsa.” 2013. Web. 04 Mar 2021.
Vancouver:
Jovanović, Mlađan 1. Razvoj kontekstno-osetljivih korisničkih
interfejsa. [Internet] [Thesis]. Univerzitet u Beogradu; 2013. [cited 2021 Mar 04].
Available from: https://fedorabg.bg.ac.rs/fedora/get/o:5474/bdef:Content/get.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Jovanović, Mlađan 1. Razvoj kontekstno-osetljivih korisničkih
interfejsa. [Thesis]. Univerzitet u Beogradu; 2013. Available from: https://fedorabg.bg.ac.rs/fedora/get/o:5474/bdef:Content/get
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Iowa State University
26.
Blythe, Derek.
An investigation of viral fitness using statistical and computer models of Equine Infectious Anemia Virus infection.
Degree: 2014, Iowa State University
URL: https://lib.dr.iastate.edu/etd/13787
► Simulation or statistically based models are often used to explore the outcomes and dynamics of physical systems or scientific experiments. In this work, we consider…
(more)
▼ Simulation-based or statistical models are often used to explore the outcomes and dynamics of physical systems or scientific experiments. In this work, we consider the use of a mixed-effects differential equation model and of a stochastic agent-based model for data from competition infection experiments with Equine Infectious Anemia Virus (EIAV). EIAV is a retrovirus that presents as a lifelong persistent infection. Vaccine development for this and other retroviruses has been impeded by the genetic variation that the virus exhibits under host immune pressure. To assess whether genetic variation has an impact on replicative capacity, phenotypically distinct variants of EIAV were competed in dual infection assays. Data from these experiments were used to develop models aimed at detecting differences in replicative capacity among the variants.
We first consider a mixed-effects model of data from an in vivo competition assay, with parameters estimated by Markov chain Monte Carlo (MCMC) methods. In vitro competition experiments, which offer more controlled conditions than the in vivo assays, were also conducted. We then propose an agent-based computer model that simulates cell-free and cell-associated virus spread to model the data from the in vitro competition assays. To estimate the parameters of the agent-based model, a surrogate Gaussian process model is used. Finally, we propose an extension of the Gaussian process model to account for the additional variance present in stochastic computer models.
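A minimal sketch of the surrogate idea in the last paragraph (hypothetical simulator and kernel choices, using scikit-learn): fit a Gaussian process to a few expensive, noisy simulator runs and query its posterior instead of re-running the simulator; the WhiteKernel term loosely stands in for the extra variance of a stochastic simulator.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

def noisy_simulator(theta):
    """Stand-in for an expensive stochastic agent-based run (invented)."""
    return np.sin(3 * theta) + 0.1 * rng.normal(size=np.shape(theta))

theta_train = np.linspace(0, 2, 12).reshape(-1, 1)   # design points
y_train = noisy_simulator(theta_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(theta_train, y_train)

theta_new = np.array([[0.5], [1.5]])
mean, std = gp.predict(theta_new, return_std=True)   # cheap emulator queries
print(np.round(mean, 2), np.round(std, 2))
```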
Subjects/Keywords: Agent Based Model; Cell Associated Spread; Computer Model; Computer Model Emulation; Equine Infectious Anemia Virus; Gaussian Stochastic Process; Statistics and Probability
APA (6th Edition):
Blythe, D. (2014). An investigation of viral fitness using statistical and computer models of Equine Infectious Anemia Virus infection. (Thesis). Iowa State University. Retrieved from https://lib.dr.iastate.edu/etd/13787
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Blythe, Derek. “An investigation of viral fitness using statistical and computer models of Equine Infectious Anemia Virus infection.” 2014. Thesis, Iowa State University. Accessed March 04, 2021.
https://lib.dr.iastate.edu/etd/13787.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Blythe, Derek. “An investigation of viral fitness using statistical and computer models of Equine Infectious Anemia Virus infection.” 2014. Web. 04 Mar 2021.
Vancouver:
Blythe D. An investigation of viral fitness using statistical and computer models of Equine Infectious Anemia Virus infection. [Internet] [Thesis]. Iowa State University; 2014. [cited 2021 Mar 04].
Available from: https://lib.dr.iastate.edu/etd/13787.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Blythe D. An investigation of viral fitness using statistical and computer models of Equine Infectious Anemia Virus infection. [Thesis]. Iowa State University; 2014. Available from: https://lib.dr.iastate.edu/etd/13787
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
27.
Singh, Saurabh.
Model of division of labor in artificial society with
continuous demand and in industrial cluster with positive social
influence.
Degree: MS, Computer
Science, 2013, National Library of Canada
URL: http://scholar.uwindsor.ca/etd/5004
► Two models of division of labor, or specialization, in two different systems are proposed in the thesis. The domain of the first one is…
(more)
▼ Two models of division of labor, or specialization, in two different systems are proposed in this thesis. The domain of the first is the artificial society, whereas the second concerns the industrial cluster. There are several models for the emergence of increased division of labor in agent societies; two such models are the Genetic Threshold Model (GTM) and the Social Inhibition Model (SIM). Combining these two concepts, we propose a hybrid model for the emergence of division of labor as a function of demand varying continuously over a suitably chosen smooth curve (a minimal threshold-response sketch follows below). In the second model, we introduce a new concept of positive social response in modeling the adaptive behavior of an industrial cluster, and a new formulation for the workload of an organization handling a single task at a time in the cluster.
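For intuition, a minimal sketch of the fixed-response-threshold mechanism that threshold models of division of labor typically build on (the generic textbook response function, not the GTM/SIM hybrid itself): an agent with threshold theta engages a task of stimulus s with probability s^2 / (s^2 + theta^2), so low-threshold agents come to specialize in the task.

```python
import numpy as np

rng = np.random.default_rng(4)
thresholds = rng.uniform(1, 10, size=50)   # heterogeneous agents
s = 0.0                                    # task stimulus (demand)
for step in range(100):
    s += 2.0                               # demand grows each step
    engage_p = s**2 / (s**2 + thresholds**2)
    working = rng.random(50) < engage_p
    s = max(0.0, s - 0.5 * working.sum())  # each worker reduces the stimulus

low = thresholds < np.median(thresholds)
print("fraction working, low-threshold agents:", working[low].mean())
print("fraction working, high-threshold agents:", working[~low].mean())
```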
Advisors/Committee Members: Ziad Kobti.
Subjects/Keywords: Computer science; division of labor model; Computer Sciences
APA (6th Edition):
Singh, S. (2013). Model of division of labor in artificial society with
continuous demand and in industrial cluster with positive social
influence. (Masters Thesis). National Library of Canada. Retrieved from http://scholar.uwindsor.ca/etd/5004
Chicago Manual of Style (16th Edition):
Singh, Saurabh. “Model of division of labor in artificial society with
continuous demand and in industrial cluster with positive social
influence.” 2013. Masters Thesis, National Library of Canada. Accessed March 04, 2021.
http://scholar.uwindsor.ca/etd/5004.
MLA Handbook (7th Edition):
Singh, Saurabh. “Model of division of labor in artificial society with
continuous demand and in industrial cluster with positive social
influence.” 2013. Web. 04 Mar 2021.
Vancouver:
Singh S. Model of division of labor in artificial society with
continuous demand and in industrial cluster with positive social
influence. [Internet] [Masters thesis]. National Library of Canada; 2013. [cited 2021 Mar 04].
Available from: http://scholar.uwindsor.ca/etd/5004.
Council of Science Editors:
Singh S. Model of division of labor in artificial society with
continuous demand and in industrial cluster with positive social
influence. [Masters Thesis]. National Library of Canada; 2013. Available from: http://scholar.uwindsor.ca/etd/5004

University of Wisconsin – Milwaukee
28.
Benzaid, Zachary Salim.
Analysis of Bas-Relief Generation Techniques.
Degree: MS, Engineering, 2017, University of Wisconsin – Milwaukee
URL: https://dc.uwm.edu/etd/1446
▼ Simplifying the process of generating relief sculptures has been an interesting topic of research in the past decade. A relief is a type of sculpture that does not entirely extend into three-dimensional space. Instead, its details are carved into a flat surface, like wood or stone, such that slight elevations from the flat plane define the subject of the sculpture. When viewed orthogonally straight on, a relief can look like a full sculpture or statue, in that a full sense of depth of the subject can be perceived. Creating such a model manually is a tedious and difficult process, akin to the challenges a painter faces when designing a convincing painting. As with painting, certain digital tools (most commonly 3D modeling programs) can make the process easier, but obtaining sufficient detail can still take a great deal of time. To further simplify relief generation, a sizable amount of research has gone into developing semi-automated processes for creating reliefs from different types of models. These methods vary in many ways, including the type of input used, the computational time required, and the quality of the resulting model. Performance typically depends on the type of operations applied to the input model, and usually on user-specified parameters that modify its appearance.
In this thesis, we address a few related topics. First, we analyze previous work in the field and briefly summarize the procedures, to highlight the variety of ways the problem has been solved. We then look at specific algorithms for generating reliefs from 2D and 3D models. After explaining two of each type, a "basic" approach and a more sophisticated one, we compare the algorithms on difficulty of implementation, quality of results, and processing time. The final section includes additional sample results from these algorithms and suggests possible ideas for enhancing their results, which could be applied in continuing research on the topic.
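As a concrete illustration of the kind of "basic" relief approach the abstract alludes to, the following Python sketch compresses the dynamic range of a depth map onto a thin slab by linear rescaling. The synthetic hemisphere input and the target depth are illustrative assumptions; published methods (e.g., gradient-domain compression) are considerably more sophisticated.

```python
import numpy as np

def linear_relief(depth_map: np.ndarray, target_depth: float = 0.05) -> np.ndarray:
    """Rescale a depth map so its full range fits within a shallow relief."""
    d = depth_map.astype(float)
    d_min, d_max = d.min(), d.max()
    if d_max == d_min:                 # flat input: nothing to compress
        return np.zeros_like(d)
    return (d - d_min) / (d_max - d_min) * target_depth

# Hypothetical usage with a synthetic hemisphere standing in for a 3D model.
y, x = np.mgrid[-1:1:128j, -1:1:128j]
depth = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))
relief = linear_relief(depth)
print(relief.max())   # 0.05: the subject now protrudes only slightly from the plane
```

Linear rescaling preserves relative depth ordering but flattens fine detail uniformly, which is exactly why the more sophisticated methods compared in the thesis operate on gradients instead.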
Advisors/Committee Members: Zeyun Yu.
Subjects/Keywords: Computer Graphics; Image Processing; Model; Relief; Sculpture; Computer Sciences
APA (6th Edition):
Benzaid, Z. S. (2017). Analysis of Bas-Relief Generation Techniques. (Thesis). University of Wisconsin – Milwaukee. Retrieved from https://dc.uwm.edu/etd/1446
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Benzaid, Zachary Salim. “Analysis of Bas-Relief Generation Techniques.” 2017. Thesis, University of Wisconsin – Milwaukee. Accessed March 04, 2021.
https://dc.uwm.edu/etd/1446.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Benzaid, Zachary Salim. “Analysis of Bas-Relief Generation Techniques.” 2017. Web. 04 Mar 2021.
Vancouver:
Benzaid ZS. Analysis of Bas-Relief Generation Techniques. [Internet] [Thesis]. University of Wisconsin – Milwaukee; 2017. [cited 2021 Mar 04].
Available from: https://dc.uwm.edu/etd/1446.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Benzaid ZS. Analysis of Bas-Relief Generation Techniques. [Thesis]. University of Wisconsin – Milwaukee; 2017. Available from: https://dc.uwm.edu/etd/1446
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Western Ontario
29.
Bou Nassif, Ali.
Software Size and Effort Estimation from Use Case Diagrams Using Regression and Soft Computing Models.
Degree: 2012, University of Western Ontario
URL: https://ir.lib.uwo.ca/etd/547
▼ In this research, we propose a novel model to predict software size and effort from use case diagrams. The main advantage of our model is that it can be used in the early stages of the software life cycle, and that it can help project managers conduct cost estimation efficiently and early, thus avoiding, among other benefits, project overestimation and late delivery. Software size, productivity, complexity and requirements stability are the inputs of the model. The model is composed of six independent sub-models: non-linear regression, linear regression with a logarithmic transformation, a Radial Basis Function Neural Network (RBFNN), a Multilayer Perceptron Neural Network (MLP), a General Regression Neural Network (GRNN) and a Treeboost model. Several experiments were conducted to train and test the model based on the size of the training and testing data points. The neural network models were evaluated against the regression models, as well as against two other models that conduct software estimation from use case diagrams. Results show that our model outperforms other relevant models on five evaluation criteria. While the performance of each of the six sub-models varies with the size of the project dataset used for evaluation, the non-linear regression model was found to outperform the linear regression model, and the GRNN model outperformed the other neural network models. Furthermore, experiments demonstrated that the Treeboost model can be used efficiently to predict software effort.
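One of the six sub-models named above, linear regression with a logarithmic transformation, amounts to fitting a power law effort = a * size^b in log space. The sketch below shows the idea on synthetic data; the project figures and the fitted coefficients are illustrative assumptions, not the thesis's dataset or results.

```python
import numpy as np

# Hypothetical projects (size in use-case points, effort in person-hours).
size   = np.array([80, 120, 150, 210, 300, 420], dtype=float)
effort = np.array([900, 1500, 1800, 2700, 4200, 6100], dtype=float)

# Fit log(effort) = log(a) + b * log(size) with ordinary least squares.
b, log_a = np.polyfit(np.log(size), np.log(effort), deg=1)

def predict_effort(use_case_points: float) -> float:
    """Predict effort from size using the fitted power-law model."""
    return float(np.exp(log_a) * use_case_points ** b)

print(f"effort = {np.exp(log_a):.1f} * size^{b:.2f}")
print(f"predicted effort for 250 points: {predict_effort(250):.0f} person-hours")
```

The log transformation is what lets a plain linear fit capture the non-linear growth of effort with size, which is presumably why both the linear and non-linear regression variants appear among the sub-models.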
Subjects/Keywords: Software Size and Effort Estimation; Regression Analysis; MLP Model; RBFNN Model; GRNN Model; Treeboost Model; Other Electrical and Computer Engineering
APA (6th Edition):
Bou Nassif, A. (2012). Software Size and Effort Estimation from Use Case Diagrams Using Regression and Soft Computing Models. (Thesis). University of Western Ontario. Retrieved from https://ir.lib.uwo.ca/etd/547
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Bou Nassif, Ali. “Software Size and Effort Estimation from Use Case Diagrams Using Regression and Soft Computing Models.” 2012. Thesis, University of Western Ontario. Accessed March 04, 2021.
https://ir.lib.uwo.ca/etd/547.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Bou Nassif, Ali. “Software Size and Effort Estimation from Use Case Diagrams Using Regression and Soft Computing Models.” 2012. Web. 04 Mar 2021.
Vancouver:
Bou Nassif A. Software Size and Effort Estimation from Use Case Diagrams Using Regression and Soft Computing Models. [Internet] [Thesis]. University of Western Ontario; 2012. [cited 2021 Mar 04].
Available from: https://ir.lib.uwo.ca/etd/547.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Bou Nassif A. Software Size and Effort Estimation from Use Case Diagrams Using Regression and Soft Computing Models. [Thesis]. University of Western Ontario; 2012. Available from: https://ir.lib.uwo.ca/etd/547
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Queensland University of Technology
30.
Marrington, Andrew Daniel.
Computer profiling for forensic purposes.
Degree: 2009, Queensland University of Technology
URL: https://eprints.qut.edu.au/31048/
▼ Computer forensics is the process of gathering and analysing evidence from computer systems to aid in the investigation of a crime. Typically, such investigations are undertaken by human forensic examiners using purpose-built software to discover evidence from a computer disk. This process is a manual one, and the time it takes a forensic examiner to conduct such an investigation is proportional to the storage capacity of the computer's disk drives. The heterogeneity and complexity of the various data formats stored on modern computer systems compound the problems posed by the sheer volume of data. The decision to undertake a computer forensic examination of a computer system is a decision to commit significant quantities of a human examiner's time. Where there is no prior knowledge of the information contained on a computer system, this commitment of time and energy occurs with little idea of the potential benefit to the investigation. The key contribution of this research is the design and development of an automated process to describe a computer system and its activity for the purposes of a computer forensic investigation. The term proposed for this process is computer profiling. A model of a computer system and its activity has been developed over the course of this research. Using this model, a computer system under investigation can be automatically described in terms useful to a forensic investigator. The computer profiling process is resilient to attempts to disguise malicious computer activity. This resilience is achieved by detecting inconsistencies in the information used to infer the apparent activity of the computer. The practicality of the computer profiling process has been demonstrated by a proof-of-concept software implementation. The model and the prototype implementation utilising the model were tested with data from real computer systems. The resilience of the process to attempts to disguise malicious activity has also been demonstrated with practical experiments conducted with the same prototype software implementation.
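The inconsistency-detection idea at the heart of this abstract can be illustrated with a toy timestamp check: flag records whose timestamps cannot be ordered consistently. The record structure and the rules below are illustrative assumptions (real filesystems can produce some of these orderings legitimately, e.g., when files are copied), not the thesis's actual profiling model.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FileRecord:
    path: str
    created: datetime
    modified: datetime
    accessed: datetime

def find_inconsistencies(records: list[FileRecord]) -> list[str]:
    """Return paths whose timestamp ordering looks internally impossible.

    Simplified rule set: treat modification or access before creation as
    suspicious. A real profiler would weigh corroborating sources (logs,
    registry entries, neighbouring files) before drawing conclusions.
    """
    flagged = []
    for r in records:
        if r.modified < r.created or r.accessed < r.created:
            flagged.append(r.path)
    return flagged

# Hypothetical usage: the second record appears to have been back-dated.
records = [
    FileRecord("report.doc", datetime(2009, 1, 5), datetime(2009, 2, 1), datetime(2009, 2, 2)),
    FileRecord("ledger.xls", datetime(2009, 3, 1), datetime(2008, 12, 25), datetime(2009, 3, 1)),
]
print(find_inconsistencies(records))   # ['ledger.xls']
```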
Subjects/Keywords: computer forensics; digital evidence; computer profiling; time-lining; temporal inconsistency; computer forensic object model
APA (6th Edition):
Marrington, A. D. (2009). Computer profiling for forensic purposes. (Thesis). Queensland University of Technology. Retrieved from https://eprints.qut.edu.au/31048/
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Marrington, Andrew Daniel. “Computer profiling for forensic purposes.” 2009. Thesis, Queensland University of Technology. Accessed March 04, 2021.
https://eprints.qut.edu.au/31048/.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Marrington, Andrew Daniel. “Computer profiling for forensic purposes.” 2009. Web. 04 Mar 2021.
Vancouver:
Marrington AD. Computer profiling for forensic purposes. [Internet] [Thesis]. Queensland University of Technology; 2009. [cited 2021 Mar 04].
Available from: https://eprints.qut.edu.au/31048/.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Marrington AD. Computer profiling for forensic purposes. [Thesis]. Queensland University of Technology; 2009. Available from: https://eprints.qut.edu.au/31048/
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation