You searched for subject: (Computational cost). Showing records 1–30 of 33 total matches.

University of Toronto
1.
Sundar, Sowndarya.
Communication Augmented Scheduling for Cloud Computing with Delay Constraint and Task Dependency.
Degree: 2016, University of Toronto
URL: http://hdl.handle.net/1807/77620
Cloud computing can augment the capabilities of resource-poor mobile devices with the help of resourceful servers. It allows reduction of energy consumption and makespan by offloading computationally intensive applications to the cloud. We consider a system consisting of a remote cloud and a network of heterogeneous local processors. We aim to identify the optimal scheduling decision for a mobile application comprising dependent tasks, such that the total cost is minimized subject to an application deadline. We propose the Communication Augmented Latest Possible Scheduling (CALPS) algorithm to obtain an approximate solution to this NP-hard problem in polynomial time. We also identify a lower bound on the optimal solution and propose techniques that use this lower-bound solution to obtain improved versions of the CALPS algorithm. Using simulation results, we compare the proposed solution approaches with each other and with existing work.
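The total-cost-under-deadline objective above can be made concrete with a toy evaluator: given a task DAG, per-task local/cloud execution times and costs, and a fixed communication delay whenever dependent tasks sit on different sides, compute a schedule's cost and makespan. This is an illustration only, not the CALPS algorithm from the thesis; every task timing, cost, and the communication delay below are invented numbers.

```python
# Toy evaluation of an offloading assignment for a task DAG.
# Illustration only -- not the CALPS algorithm; all values are made up.

def evaluate(tasks, assignment, comm_delay, deadline):
    """tasks: list of (name, deps, local_time, cloud_time, local_cost,
    cloud_cost) in topological order.
    Returns (total_cost, makespan, meets_deadline)."""
    finish = {}
    total_cost = 0.0
    for name, deps, lt, ct, lc, cc in tasks:
        place = assignment[name]
        exec_time = lt if place == "local" else ct
        total_cost += lc if place == "local" else cc
        ready = 0.0
        for d in deps:
            t = finish[d]
            if assignment[d] != place:   # data must cross the network
                t += comm_delay
            ready = max(ready, t)
        finish[name] = ready + exec_time
    makespan = max(finish.values())
    return total_cost, makespan, makespan <= deadline

tasks = [
    ("a", [],         2.0, 0.5, 1.0, 3.0),
    ("b", ["a"],      3.0, 1.0, 2.0, 3.5),
    ("c", ["a"],      1.0, 0.5, 0.5, 2.0),
    ("d", ["b", "c"], 2.0, 1.0, 1.5, 3.0),
]
assignment = {"a": "local", "b": "cloud", "c": "local", "d": "local"}
cost, makespan, ok = evaluate(tasks, assignment, comm_delay=0.8, deadline=10.0)
```

Searching over such assignments exactly is what makes the problem NP-hard; heuristics like CALPS trade optimality for polynomial time.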
M.A.S.
2017-06-22
Advisors/Committee Members: Liang, Ben, Electrical and Computer Engineering.
Subjects/Keywords: Application Deadline; Cloud Computing; Computational Offloading; Cost Minimization; Dependency; Scheduling; 0464

University of the Western Cape
2.
Kazeem, Funmilayo Eniola.
Multilevel Monte Carlo simulation in options pricing.
Degree: 2014, University of the Western Cape
URL: http://hdl.handle.net/11394/4349
In Monte Carlo path simulations, which are used extensively in computational finance, one is interested in the expected value of a quantity which is a functional of the solution to a stochastic differential equation [M.B. Giles, Multilevel Monte Carlo Path Simulation, Operations Research, 56(3) (2008) 607–617], where we have a scalar function with a uniform Lipschitz bound. Normally, we discretise the stochastic differential equation numerically. The simplest estimate for this expected value is the mean of the payoff (the value of an option at the terminal period) values from N independent path simulations. The multilevel Monte Carlo path simulation method recently introduced by Giles exploits strong convergence properties to improve the computational complexity by combining simulations with different levels of resolution. This new method improves on the computational complexity of the standard Monte Carlo approach by considering Monte Carlo simulations with a geometric sequence of different time steps, following the approach of Kebaier [A. Kebaier, Statistical Romberg extrapolation: a new variance reduction method and applications to options pricing, Annals of Applied Probability 14(4) (2005) 2681–2705]. The multilevel method makes computation easy as it estimates each of the terms of the estimate independently (as opposed to the standard Monte Carlo method), such that the computational complexity of Monte Carlo path simulations is minimised. In this thesis, we investigate this method in pricing path-dependent options and the computation of option price sensitivities, also known as Greeks.
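The multilevel estimator described above writes the expectation as a telescoping sum, E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], where each correction term is estimated from fine and coarse paths driven by the same Brownian increments. The sketch below illustrates that idea for a European call under geometric Brownian motion with an Euler-Maruyama discretisation; the model and all parameter values are assumptions for illustration, not taken from the thesis.

```python
import math, random

# Minimal multilevel Monte Carlo sketch: level l uses M**l time steps,
# and the coarse path on each level reuses the fine path's increments.

def discounted_payoff(S0, K, r, sigma, T, increments):
    """Euler-Maruyama path of GBM driven by the given Brownian increments."""
    S, dt = S0, T / len(increments)
    for dW in increments:
        S += r * S * dt + sigma * S * dW
    return math.exp(-r * T) * max(S - K, 0.0)

def mlmc_price(L, N, S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, M=2):
    rng = random.Random(42)
    estimate = 0.0
    for level in range(L + 1):
        steps = M ** level
        acc = 0.0
        for _ in range(N):
            dt = T / steps
            fine = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(steps)]
            acc += discounted_payoff(S0, K, r, sigma, T, fine)
            if level > 0:
                # coarse path: pairwise sums of the same increments
                coarse = [sum(fine[i*M:(i+1)*M]) for i in range(steps // M)]
                acc -= discounted_payoff(S0, K, r, sigma, T, coarse)
        estimate += acc / N
    return estimate

price = mlmc_price(L=4, N=2000)   # Black-Scholes value is about 10.45
```

In the full method, the number of samples per level is chosen from the observed level variances to minimise total cost; here N is fixed for brevity.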
Advisors/Committee Members: Patidar, Kailash C (advisor), Ghomrasni, Raouf (advisor).
Subjects/Keywords: Monte Carlo simulation; Computational cost; Risk-neutral measure
3.
Kulesza, Joel.
Cost-optimized Automated Variance Reduction for Highly Angle-dependent Radiation Transport Analyses.
Degree: PhD, Nuclear Engineering & Radiological Sciences, 2018, University of Michigan
URL: http://hdl.handle.net/2027.42/147541
Monte Carlo variance-reduction techniques that directly bias particle direction have historically received limited attention despite being useful for many radiation transport applications, such as those in which radiation travels through large regions with a low probability of colliding. One such technique is known as DXTRAN (short for deterministic transport) in the MCNP Monte Carlo radiation transport code. Until now, effectively applying DXTRAN in calculations has been based largely on empirical observations of computational performance, with optimal DXTRAN parameters identified through manual iteration. This work develops new mathematical descriptions of the DXTRAN variance-reduction process and demonstrates a new automated variance-reduction method that applies these mathematical formulations to determine the optimal application of DXTRAN in a given problem.
Specifically, this work includes the first known deduction and application of the integral transport kernels for both biasing with DXTRAN particle production and the associated free-flight transport with truncation of the initiating particle. This work applies these new DXTRAN transport kernels to derive expressions for the mean tally response in Monte Carlo transport calculations involving DXTRAN. These expressions are then used to rigorously prove, for the first time, that the technique is unbiased. This work also derives equations for the variance and associated computational cost of Monte Carlo calculations involving DXTRAN, which are solved using the deterministic discrete ordinates method.
To verify the derivations developed in this work, fourteen 1-D and seven 2-D test-case calculations are made. Within the 2-D test cases, a variety of scenarios are examined that lead to highly angle-dependent solutions, where other variance-reduction techniques that do not directly bias particle direction are challenged. For the 1-D cases, when DXTRAN alone is used, the Monte Carlo and deterministically calculated means and variances agree within 1.4%. For the 2-D cases, the agreement is generally well within 10% and never worse than 13%, which is consistent with prior analyses for other variance-reduction techniques.
Of the verification test cases, six 1-D and six 2-D test cases are processed using an automated optimization workflow to determine optimal DXTRAN variance-reduction parameters. As long as a non-trivial change in the figure of merit (FOM) is predicted by the optimizer, the optimizer identifies improved DXTRAN parameters relative to the initial guess in all but one case. For the 2-D test cases, a coarse angular quadrature is used to permit the optimization iterations to run quickly; however, the relative change in computational cost as a result of varying DXTRAN size, position, and rouletting parameters is adequately captured.
This work provides a method that could augment a strictly variance-reducing hybrid radiation transport method (e.g., FW-CADIS) to improve the efficiency of highly angle-dependent radiation transport analyses.
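Candidate variance-reduction settings of the kind discussed above are conventionally compared by the Monte Carlo figure of merit, FOM = 1/(R²T), where R is the relative error of the tally and T the computing time; a higher FOM means a more efficient calculation. A minimal sketch of that bookkeeping, with invented run statistics (not results from the dissertation):

```python
# Figure of merit: FOM = 1 / (R^2 * T).  The two runs below are
# hypothetical numbers used only to show the comparison.

def fom(rel_error, minutes):
    return 1.0 / (rel_error ** 2 * minutes)

analog = fom(rel_error=0.10, minutes=60.0)   # no variance reduction
biased = fom(rel_error=0.02, minutes=90.0)   # direction-biased run

# For a given setup the FOM is roughly constant in runtime, so the
# ratio estimates the speedup from the variance-reduction technique.
speedup = biased / analog
```

Here the biased run spends 50% more wall time yet is about 17x more efficient, because efficiency scales with the inverse square of the relative error.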
Advisors/Committee Members: Kiedrowski, Brian (committee member), Solomon Jr., Clell J. (committee member), Ziff, Robert M (committee member), Larsen, Edward W (committee member), Martin, William R (committee member).
Subjects/Keywords: Computational-cost Optimizing Hybrid Radiation Transport Methods; Nuclear Engineering and Radiological Sciences; Engineering

University of Georgia
4.
Zhong, Zhenyu.
System oriented techniques for high-performance anti-spam solutions.
Degree: 2014, University of Georgia
URL: http://hdl.handle.net/10724/24538
Email has become a crucial part of life as the Internet has developed. However, a massive influx of spam emails has threatened the usefulness of email communication. Many techniques have been developed, such as machine learning, authentication, and collaboration. However, little has been done from a systems perspective to provide an effective, robust, and efficient anti-spam solution. The arms race between spammers and anti-spam researchers has brought new challenges to the design of modern anti-spam systems. This dissertation focuses on the systems aspects of the challenges that anti-spam researchers face in designing various anti-spam approaches. In particular, we attempt to provide solutions to the challenges in the collaborative, stand-alone, and sender-based approaches. These challenges are 1) preserving the privacy of email content in collaboration, 2) achieving both high accuracy and high processing speed, and 3) selectively punishing email senders without exact knowledge of whether the sender is a spammer or a normal user. We design a novel message-transformation technique to preserve the privacy of email content and derive resemblance information for collaborative email classification. We also carefully design a communication protocol to ensure email privacy during information exchange among the collaborating entities. The experimental results demonstrate comparable accuracy and greater robustness compared to the Bayesian and Distributed Checksum Clearinghouse approaches. This dissertation proposes a new metric for privacy evaluation and demonstrates a system with excellent privacy preservation. It further explores the tradeoff between spam-filtering accuracy and speed by using approximate classification, demonstrating about one order of magnitude of speed improvement over two well-known spam filters while achieving identical false-positive rates and similar false-negative rates. For cost-based approaches, we propose to push the spam filter to the early stage of the SMTP conversation and to determine the cost based on email quality and spam behavior. The experimental results show that, under state-of-the-art hardware, the proposed technique can effectively and significantly limit the ability of the spammer even if he possesses more CPU resources than a normal sender.
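The keyword list mentions Bloom filters, a common building block for the fast approximate-membership lookups that high-throughput spam classifiers rely on. A minimal sketch of one, with illustrative sizing, is below; this is not the dissertation's actual data structure.

```python
import hashlib

# Minimal Bloom filter: approximate set membership with no false
# negatives and a tunable false-positive rate.  Bit-array size and
# hash count here are illustrative only.

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, item):
        # derive k positions from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add("spam-checksum-123")
```

The accuracy/speed tradeoff of approximate classification shows up directly here: lookups cost a few hash evaluations regardless of set size, at the price of occasional false positives.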
Subjects/Keywords: SPAM; Security and Privacy; Performance Evaluation; Approximation; Bloom Filter; Data Distribution; Computational Cost

Australian National University
5.
Kamal, Tahseen.
Compact microscopy systems with non-conventional optical techniques.
Degree: 2018, Australian National University
URL: http://hdl.handle.net/1885/148711
This work has been motivated by global efforts to decentralize high-performance imaging systems through frugal engineering and the expansion of 3D fabrication technologies. Typically, high-resolution imaging systems are confined to clinical or laboratory environments due to the limited means of producing optical lenses on demand.
The use of lenses is an essential means to achieve high-resolution imaging, but conventional optical lenses are made using either polished glass or molded plastics, both suited to highly skilled craftsmen or factory-level production. In the first part of this work, an alternative low-cost lens-making process for generating high-quality optical lenses with minimal operator training is discussed. We explored the use of liquid droplets to make lenses. This unconventional method relies on interfacial forces to generate curved droplets that, if solidified, can become convex-shaped lenses. To achieve this, we studied the droplet behaviour (the Rayleigh-Plateau phenomenon) before creating a set of 3D-printed tools to generate droplets. We measured and characterized the fabrication techniques to ensure reliable, on-demand lens fabrication at high throughput. Compact imaging requires a compact optical system and computing unit, so in the next part of this work we engineered a deconstructed microscope system for field-portable imaging.
Still, a core limitation of all optical lenses is the physical size of the lens aperture, which limits resolution, together with optical aberrations, which limit imaging quality. In the next part of this work, we investigated the use of computational-optics-based optimization approaches to conduct in situ characterization of aberrations so that they can be digitally removed. The computational approach used in this work is known as Fourier Ptychography (FP), an emerging computational microscopy technique that combines a synthetic aperture with iterative optimization algorithms, offering increased resolution at full field-of-view (FOV) together with aberration removal. Using FP techniques, we have shown measurements of optical distortions from lenses made from droplets alone. We also investigated the limitations of FP in aberration recovery on moldless lenses.
In conclusion, this work presents new opportunities to engineer high-resolution imaging systems using modern 3D printing approaches. Our successful demonstration of FP techniques on moldless lenses will usher in new applications in digital pathology and low-cost mobile health.
Subjects/Keywords: droplet lens; lenses; manufacturing; mobile health; computational; compact; portable; low-cost; 3D printed; passive droplet

Virginia Tech
6.
Kamal, Tariq.
Computational Cost Analysis of Large-Scale Agent-Based Epidemic Simulations.
Degree: PhD, Computer Science and Applications, 2016, Virginia Tech
URL: http://hdl.handle.net/10919/82507
Agent-based epidemic simulation (ABES) is a powerful and realistic approach for studying the impacts of disease dynamics and complex interventions on the spread of an infection in the population. Among many ABES systems, EpiSimdemics comes closest to the popular agent-based epidemic simulation systems developed by Eubank, Longini, Ferguson, and Parker. EpiSimdemics is a general framework that can model many reaction-diffusion processes besides the Susceptible-Exposed-Infectious-Recovered (SEIR) models. This model allows the study of complex systems as they interact, thus enabling researchers to model and observe socio-technical trends and forces. Pandemic planning at the world level requires simulation of over 6 billion agents, where each agent has a unique set of demographics, daily activities, and behaviors. Moreover, the stochastic nature of epidemic models, the uncertainty in the initial conditions, and the variability of reactions require the computation of several replicates of a simulation for a meaningful study. Given the hard timelines to respond, running many replicates (15-25) of several configurations (10-100) of these compute-heavy simulations is only possible on high-performance computing (HPC) clusters. These agent-based epidemic simulations are irregular and show poor execution performance on HPC clusters due to the evolutionary nature of their workload, large irregular communication, and load imbalance.
For increased utilization of HPC clusters, the simulation needs to be scalable. Many challenges arise when improving the performance of agent-based epidemic simulations on high-performance clusters. Firstly, large-scale graph-structured computation is central to the processing of these simulations, where the star-motif quality nodes (natural graphs) create large computational imbalances and communication hotspots. Secondly, the computation is performed by classes of tasks that are separated by global synchronization. The non-overlapping computations cause idle times, which introduce load-balancing and cost-estimation challenges. Thirdly, the computation is overlapped with communication, which is difficult to measure using simple methods, making cost estimation very challenging. Finally, the simulations are iterative, and the workload (computation and communication) may change through iterations, in turn introducing load imbalances.
This dissertation focuses on developing a cost-estimation model and load-balancing schemes to increase the runtime efficiency of agent-based epidemic simulations on high-performance clusters. While developing the cost model and load-balancing schemes, we perform static and dynamic load analysis of such simulations. We also statically quantify the computational and communication workloads in EpiSimdemics. We designed, developed, and evaluated a cost model for estimating the execution cost of large-scale parallel agent-based epidemic simulations (and, more generally, of all constrained producer-consumer parallel algorithms). This cost model…
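The idle-time and load-imbalance issues described above can be sketched with a toy bulk-synchronous cost model: because ranks compute between global synchronizations, every step costs as much as its slowest rank, and all faster ranks sit idle. The per-rank workloads and communication term below are invented numbers, not EpiSimdemics measurements.

```python
# Toy cost estimate for one globally synchronized simulation step.

def iteration_cost(compute_times, comm_time):
    """Estimated wall time and total idle time for one synchronized step.
    compute_times: seconds of work on each rank; comm_time: exchange cost."""
    slowest = max(compute_times)
    idle = sum(slowest - t for t in compute_times)   # time wasted waiting
    return slowest + comm_time, idle

compute = [4.0, 6.5, 5.0, 6.0]        # per-rank work on 4 ranks (invented)
wall, idle = iteration_cost(compute, comm_time=0.5)

# imbalance factor: slowest rank relative to the mean load
imbalance = max(compute) / (sum(compute) / len(compute))
```

A load-balancing scheme of the kind the dissertation develops aims to drive the imbalance factor toward 1, shrinking the idle term without inflating the communication term.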
Advisors/Committee Members: Butt, Ali Raza Ashraf (committeechair), Marathe, Madhav Vishnu (committee member), Bisset, Keith R. (committee member), Vullikanti, Anil Kumar S. (committee member), Schulz, Martin (committee member).
Subjects/Keywords: Cost Analysis and Estimation; Parallel Algorithms; Graph Partitioning; Computational Epidemiology; Disease Dynamics; Statistical Analysis
7.
Allesson, Sara.
Sheet Metal Forming Simulations with Elastic Dies: Emphasis on Computational Cost.
Degree: 2019, Department of Mechanical Engineering
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18236
The car industry produces many of its car parts using sheet metal forming, where one of the most time-consuming phases is the development and manufacturing of new forming tools. As of today, when a new tool is to be evaluated in terms of usability, a forming simulation is conducted to predict possible failures before manufacturing. The assumption is then that the tools are rigid and the only deformable part is the sheet metal itself. This is, however, not the case, since the tools also deform during the forming process. Previous research, which is the basis of this thesis, used a model with only elastic tools and showed results of high accuracy in comparison to a rigid setup. However, this simulation is not practical for daily use, since it requires high computational power and a long simulation time. The aim and scope of this thesis is to evaluate how a sheet metal forming simulation with elastic tool consideration can be reduced in terms of computational cost, using the software LS-DYNA. A small deviation of the forming result is acceptable, and the aim is to run the simulation with a 50-75 % reduction of time on fewer cores than the approximately 14 hours and 800 CPUs that the simulation requires today. The first step was to alter the geometry of the tools and evaluate the impact on the deformations of the blank. The elastic solid parts that undergo only small deformations are deleted and replaced by rigid surfaces, making the model partly elastic. Next, different decomposition methods are studied to determine which kind makes the simulation run faster. Finally, a scaling analysis is conducted to determine the range of computational power to be used to run the simulations as efficiently as possible, and which part of the simulation affects the simulation time the most.
The correlation of major-strain deviation between a fully elastic model and a partly elastic model showed results of high accuracy, as did comparison with production measurements of a formed blank. The computational time is reduced by over 90 % when using approximately 65 % of the initial computational power. If the simulations are run with even fewer cores, 10 % of the initial number of CPUs, the simulation time is still reduced by over 70 %. The conclusion of this work is that it is possible to run a partly elastic sheet metal forming simulation much more efficiently than a fully elastic model, without reliability problems in the forming results. This is achieved by reducing the number of elements, evaluating the decomposition method, and conducting a scaling analysis to evaluate the efficiency of computational power.
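The scaling analysis above compares runs by core count and wall time. A minimal sketch of that bookkeeping follows; the 800-CPU, 14-hour baseline paraphrases the abstract, while the 520-core, 1.3-hour run is an invented data point chosen only to be consistent with the reported "over 90 %" time reduction at roughly 65 % of the cores.

```python
# Relative scaling metrics for comparing two simulation runs.

def relative_metrics(base_cores, base_hours, cores, hours):
    time_reduction = 1.0 - hours / base_hours     # fraction of time saved
    core_fraction = cores / base_cores            # fraction of cores used
    # core-hours ratio: the resource bill relative to the baseline
    cost_ratio = (cores * hours) / (base_cores * base_hours)
    return time_reduction, core_fraction, cost_ratio

# fully elastic baseline vs. a hypothetical partly elastic run
red, frac, cost = relative_metrics(800, 14.0, 520, 1.3)
```

Tracking core-hours alongside wall time matters here: a run can be faster yet more expensive if it scales poorly, which is exactly what a scaling analysis is meant to expose.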
Subjects/Keywords: Elastic Sheet Metal Forming Simulation; Computational Cost; Decomposition; LS-DYNA; Elastisk plåtformningssimulering; beräkningskostnad; dekomposition; LS-DYNA; Other Mechanical Engineering; Annan maskinteknik

University of Florida
8.
Zhao, Songlin.
From Fixed to Adaptive Budget Robust Kernel Adaptive Filtering.
Degree: PhD, Electrical and Computer Engineering, 2012, University of Florida
URL: https://ufdc.ufl.edu/UFE0044951
Owing to their universal modeling capacity, convex performance surfaces, and modest computational complexity, kernel adaptive filters have attracted increasing attention. Although these methods achieve powerful classification and regression performance on complicated nonlinear problems, they have drawbacks. This work focuses on improving kernel adaptive filters in both accuracy and computational complexity. After reviewing the cost functions of some existing adaptive filters, we introduce an information-theoretic objective function, the Maximal Correntropy Criterion (MCC), which captures higher-order statistical information. We propose adopting this objective function for kernel adaptive filters to improve accuracy in nonlinear and non-Gaussian scenarios. To determine the free parameter, the kernel width in correntropy, an adaptive method based on the statistical properties of the prediction error is proposed. We then propose a growing-and-pruning method that realizes a fixed-budget kernel least mean square (KLMS) algorithm, based on improvements to the quantized KLMS algorithm and a new significance measure. The goal is to control the computational complexity and memory requirements of kernel adaptive filters while preserving accuracy as much as possible. This balance between accuracy and filter model order is explored from the perspective of information learning: the central issue is the trade-off between system complexity and accuracy, and an information-learning criterion, Minimum Description Length (MDL), is introduced to kernel adaptive filtering. Two formulations of MDL, batch and online, are developed and illustrated by approximation-level selection in KRLS-ALD and center-dictionary selection in KLMS, respectively. The end result is a methodology that controls the kernel adaptive filter dictionary (model order) according to the complexity of the true system and the input signal for online learning, even in nonstationary environments.
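The MCC-weighted update described above can be illustrated with a toy sketch. This is not the thesis's implementation: all names and parameter values are invented for the example, and the quantization/pruning that keeps the dictionary at a fixed budget is omitted, so the center list here grows with every sample.

```python
import math

def gaussian_kernel(x, center, width=1.0):
    """Gaussian kernel between a scalar input and a stored center."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def klms_mcc_train(xs, ys, step=0.5, kernel_width=1.0, corr_width=1.0):
    """Kernel LMS trained under a correntropy (MCC-style) criterion.

    Compared with the plain LMS update (proportional to the error e),
    each update is scaled by exp(-e^2 / (2*corr_width^2)), which damps
    the influence of outlier errors under non-Gaussian noise.
    """
    centers, weights = [], []
    for x, y in zip(xs, ys):
        pred = sum(w * gaussian_kernel(x, c, kernel_width)
                   for w, c in zip(weights, centers))
        e = y - pred
        gain = math.exp(-(e ** 2) / (2.0 * corr_width ** 2))  # correntropy gain
        centers.append(x)
        weights.append(step * gain * e)
    return centers, weights

def klms_predict(x, centers, weights, kernel_width=1.0):
    """Evaluate the learned kernel expansion at a new input."""
    return sum(w * gaussian_kernel(x, c, kernel_width)
               for w, c in zip(weights, centers))
```

The `gain` factor is near 1 for small errors and near 0 for outliers, which is what makes the update robust in non-Gaussian scenarios.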
Advisors/Committee Members: Principe, Jose C (committee chair), Rangarajan, Anand (committee member), Shea, John M (committee member), Chen, Yunmei (committee member).
Subjects/Keywords: Adaptive filters; Approximation; Computational complexity; Cost functions; Input data; Neural networks; Pruning; Signal processing; Simulations; Time series; kernel – songlin
APA (6th Edition):
Zhao, S. (2012). From Fixed to Adaptive Budget Robust Kernel Adaptive Filtering. (Doctoral Dissertation). University of Florida. Retrieved from https://ufdc.ufl.edu/UFE0044951

University of New South Wales
9.
Osman, Ilham.
Optimal Finite Control Set Model Predictive Control Strategies for Induction Motor Drives.
Degree: Electrical Engineering & Telecommunications, 2020, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/67166 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:67789/SOURCE02?view=true
Finite control set model predictive control (FCS-MPC) for motor drives has been vigorously investigated over the past decade for control of current, torque, stator flux linkage, and other quantities in a motor drive system. Model predictive torque control (MPTC) and model predictive flux control (MPFC) are two popular categories of FCS-MPC for motor drives. In FCS-MPTC, a finite number of candidate voltage vectors are evaluated by a cost function in an iterative prediction loop. The cost function includes several control objectives, such as minimization of torque and flux errors, the neutral-point voltage of multi-level inverters, and the inverter switching frequency. The control algorithm selects the optimal voltage vector that minimizes the predefined cost function. Each variable in the cost function carries a weighting factor to account for differing magnitudes and units. Weighting-factor tuning is a non-trivial and complicated task, particularly when the algorithm has two prime control objectives, i.e., torque and flux: the algorithm may then choose a globally optimal solution for torque that is only sub-optimal for flux. In general, a conventional FCS-MPTC algorithm carries a large computational burden due to A) the large number of available voltage vectors, as in multi-level inverters or discrete-SVM MPC techniques, and B) the presence of weighting factors in a cost function with several control objectives. This thesis develops a two-stage optimization-based FCS-MPC algorithm for an induction motor drive that uses reduced voltage control sets (RVCS) to evaluate the predefined cost function in the prediction loop. The proposed algorithm lowers the computational burden of the controller in two cascaded stages for a three-level neutral-point-clamped voltage source inverter (3L-NPC VSI) fed IM drive. The voltage-vector selection in the first stage is executed using two different approaches: six long voltage vectors, or three long voltage vectors. The first approach, presented in Chapter 3, evaluates all six long voltage vectors in the first stage to obtain an optimal long voltage vector; in the second stage, the 11 voltage vectors nearest that optimal long vector are evaluated to reach the final optimal voltage vector, and the cost-function values from both stages are compared to select it. In Chapter 4, the second approach evaluates three long voltage vectors instead of six in the first stage, using the sign of the stator flux deviation and the position of the stator flux to select them; its second stage is the same as in Chapter 3. The proposed two-stage FCS-MPC optimization algorithms of Chapters 3 to 5 reduce the computational burden of the conventional algorithm, which evaluates all 27 available voltage vectors. In Chapter 3, the proposed algorithm's cost function combined torque and…
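The single-stage prediction loop that the two-stage scheme accelerates can be sketched as follows. This is an illustrative reduction, not the thesis's algorithm: `predict` stands in for the drive's discrete-time model, and `w_flux` is the weighting factor trading the flux objective against the torque objective.

```python
def fcs_mpc_step(torque_ref, flux_ref, predict, voltage_vectors, w_flux=1.0):
    """One FCS-MPC iteration: evaluate every candidate voltage vector
    with a one-step prediction model and a weighted cost, and return
    the vector minimizing that cost.

    predict(v) is a placeholder for the discrete-time motor model: it
    returns the predicted (torque, stator flux) one sampling period
    after applying voltage vector v.
    """
    best_v, best_cost = None, float("inf")
    for v in voltage_vectors:
        torque_pred, flux_pred = predict(v)
        cost = abs(torque_ref - torque_pred) + w_flux * abs(flux_ref - flux_pred)
        if cost < best_cost:
            best_v, best_cost = v, cost
    return best_v, best_cost
```

A conventional 3L-NPC controller would pass all 27 vectors to such a loop; the two-stage scheme instead calls it twice with much smaller candidate sets (six or three long vectors, then the 11 neighbours of the stage-one winner).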
Advisors/Committee Members: Rahman, Fazlur, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW.
Subjects/Keywords: Cost function; Model predictive control; Induction motor drive; Inverters; Optimization; Computational burden; Sub-optimal; Flux control; Nonlinear control
APA (6th Edition):
Osman, I. (2020). Optimal Finite Control Set Model Predictive Control Strategies for Induction Motor Drives. (Doctoral Dissertation). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/67166 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:67789/SOURCE02?view=true
10.
Ravindran, Rajeswaran Chockalingapuram.
Scheduling Heuristics for Maximizing the Output Quality of Iris Task Graphs in Multiprocessor Environment with Time and Energy Bounds.
Degree: MS, Electrical & Computer Engineering, 2012, University of Massachusetts
URL: https://scholarworks.umass.edu/theses/826
Embedded real-time applications are often subject to time and energy constraints, and are usually characterized by logically separable sets of tasks with precedence constraints. The computational effort behind each task in the system is responsible for a physical functionality of the embedded system. In this work we define theoretical models relating the quality of the physical functionality to the computational load of the tasks, and develop optimization problems to maximize the quality of the system subject to constraints such as time and energy. The novelties of this work are threefold. First, it maximizes the final output quality of a set of precedence-constrained tasks whose quality can be expressed with appropriate cost functions; we have developed heuristic scheduling algorithms for maximizing the quality of the final output of embedded applications. Second, it accounts for the fact that the output quality of one task noticeably affects the output quality of the tasks that depend on it. Finally, the run-time characteristics of the tasks are modeled by simulating a distribution of run times, which yields an averaged output quality for the system rather than an unsampled quality based on arbitrary run times. Many real-time tasks fall into the IRIS (Increased Reward with Increased Service) category: such tasks can be prematurely terminated at the cost of poorer-quality output. In this work, we study the scheduling of IRIS tasks on multiprocessors. IRIS tasks may be dependent, with one task feeding other tasks in a Task Precedence Graph (TPG), so task output quality depends on the quality of the input data as well as on the execution time allowed. We study the allocation and scheduling of IRIS TPGs on multiprocessors to maximize output quality. The heuristics developed can effectively reclaim resources when tasks finish earlier than their estimated worst-case execution time, and dynamic voltage scaling is used to keep energy consumption within specified bounds.
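The IRIS notion of reward increasing with service, and of quality propagating through dependent tasks, can be illustrated with a toy model. The saturating reward curve and the way quality is passed along a chain are assumptions made for this example, not the thesis's actual cost functions.

```python
import math

def task_quality(service_time, input_quality=1.0, k=1.0):
    """IRIS-style reward curve: quality grows with service time with
    diminishing returns, capped by the quality of the input data."""
    return input_quality * (1.0 - math.exp(-k * service_time))

def chain_quality(service_times, k=1.0):
    """Final output quality of a linear chain of dependent IRIS tasks:
    each task's input quality is its predecessor's output quality."""
    quality = 1.0
    for t in service_times:
        quality = task_quality(t, quality, k)
    return quality
```

Under this model, shortening any task's service (e.g. to meet a deadline or an energy bound) degrades not only its own output but that of every downstream task, which is the effect the scheduling heuristics must weigh.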
Advisors/Committee Members: C Mani Krishna, Israel Koren.
Subjects/Keywords: precedence constraints; computational load; cost functions; reward; service; dynamic voltage scaling; Other Computer Engineering; Other Electrical and Computer Engineering
APA (6th Edition):
Ravindran, R. C. (2012). Scheduling Heuristics for Maximizing the Output Quality of Iris Task Graphs in Multiprocessor Environment with Time and Energy Bounds. (Masters Thesis). University of Massachusetts. Retrieved from https://scholarworks.umass.edu/theses/826
11.
Vieira, Hiparco Lins.
Redução do custo computacional do algoritmo RRT através de otimização por eliminação.
Degree: Mestrado, Sistemas Dinâmicos, 2014, University of São Paulo
URL: http://www.teses.usp.br/teses/disponiveis/18/18153/tde-05092014-163621/
;
The application of sampling-based techniques in path-planning algorithms has become increasingly widespread. In this group, one of the most widely used algorithms is the Rapidly-exploring Random Tree (RRT), which relies on incremental sampling of configurations to efficiently compute the robot's path while avoiding obstacles. Many efforts have been made to reduce the RRT's computational cost, targeting, in particular, applications that require quick responses, e.g., in dynamic environments. One of the dilemmas posed by the RRT arises from its motion-primitive generation. If many primitives are generated, enabling the robot to perform a broad range of basic movements, a significant computational cost is incurred. On the other hand, when only a few primitives are generated, and thus only a limited number of basic movements are allowed, the robot may be unable to find a solution to the problem even if one exists. To address this quandary, an optimized method for primitive generation is proposed. This method is compared with the traditional and random primitive-generation methods, considering not only computational cost but also the quality of the local and global solutions that may be attained. The optimized method is applied to the RRT algorithm, which is then used in a case study in dynamic environments. In the study, the modified RRT is evaluated in terms of the…
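For reference, the core RRT extension loop that any primitive-generation method plugs into can be sketched as below. This is a generic textbook-style sketch, not the thesis's optimized variant: the step size, sampling bounds, and the `is_free` collision check are placeholders.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=1.0,
        max_iters=5000, bounds=(0.0, 10.0), seed=0):
    """Minimal 2D RRT: sample a random configuration, extend the nearest
    tree node one step toward it, and keep the new node if it is
    collision-free.  Returns a start-to-goal path, or None on failure."""
    rng = random.Random(seed)
    tree = {start: None}  # node -> parent
    for _ in range(max_iters):
        sample = (rng.uniform(*bounds), rng.uniform(*bounds))
        near = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        scale = min(step, d) / d
        new = tuple(a + scale * (b - a) for a, b in zip(near, sample))
        if not is_free(new):
            continue  # discard configurations in collision
        tree[new] = near
        if math.dist(new, goal) <= goal_tol:
            path, node = [], new
            while node is not None:  # walk parents back to the start
                path.append(node)
                node = tree[node]
            return path[::-1]
    return None
```

The straight-line extension here is where motion primitives enter a real implementation: instead of one fixed step, the planner would evaluate the generated primitives, which is exactly the cost/coverage trade-off the thesis studies.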
Advisors/Committee Members: Grassi Júnior, Valdir.
Subjects/Keywords: Ambientes dinâmicos; Computational cost; Custo computacional; Dynamic environments; Motion primitives; Optimization; Otimização; Path planning; Path replanning; Planejamento de trajetórias; Primitivas de movimento; Replanejamento de trajetórias; RRT
APA (6th Edition):
Vieira, H. L. (2014). Redução do custo computacional do algoritmo RRT através de otimização por eliminação. (Masters Thesis). University of São Paulo. Retrieved from http://www.teses.usp.br/teses/disponiveis/18/18153/tde-05092014-163621/ ;
12.
Antonio Carlos Vilanova.
Otimização de um modelo de propagação com múltiplos obstáculos na troposfera utilizando algoritmo genético.
Degree: 2013, Federal University of Uberlândia
URL: http://www.bdtd.ufu.br//tde_busca/arquivo.php?codArquivo=4703
This thesis presents a methodology for optimizing parameters in a model of electromagnetic-wave propagation in the troposphere. The propagation model is based on parabolic equations solved by the split-step Fourier method, and performs well over irregular terrain and in situations where refractivity varies with distance. Searching for optimal parameters in models involving electromagnetic waves demands a large computational cost, especially over large search spaces. To reduce the computational cost of determining the parameter values that maximize field strength at a given observer position, an application called EP-AG was developed. The application has two main modules. The first is the propagation module, which estimates the electric field over a given terrain with irregularities and with refractivity varying with distance. The second is the optimization module, which finds the optimal antenna height and operating frequency that maximize the field at a given position on the terrain. Initially, only the propagation module was run, using different terrain and refractivity profiles; the results, shown as field contour and profile plots, demonstrated the efficiency of the model. Subsequently, to evaluate optimization by genetic algorithms, two different settings were used, varying the terrain irregularity, refractivity profile, and size of the search space. In each setting, an observation point was chosen at which the electric-field value served as a comparison metric, and the optimal parameter values were determined both by brute force and by genetic-algorithm optimization. The results showed that for small search spaces there was virtually no reduction in computational cost, whereas for large search spaces the reduction was very significant, with relative errors much smaller than those obtained by the brute-force method.
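The optimization module described above (searching antenna height and operating frequency for maximum field strength) can be illustrated generically. The sketch below is a plain real-coded genetic algorithm with tournament selection, uniform crossover, Gaussian mutation, and elitism; the fitness function and every parameter are stand-ins, not EP-AG's actual model.

```python
import random

def genetic_search(fitness, bounds, pop_size=30, generations=60,
                   mutation=0.2, seed=0):
    """Maximize `fitness` over a box defined by `bounds`
    (one (lo, hi) pair per parameter, e.g. antenna height, frequency)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        elite = max(pop, key=fitness)  # elitism: carry the best forward
        new_pop = [elite[:]]
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            # uniform crossover: each gene from either parent
            child = [x if rng.random() < 0.5 else y for x, y in zip(p1, p2)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation:
                    # Gaussian mutation, clipped to the search box
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

For small search spaces, brute force over a grid costs about as much as the GA's `pop_size * generations` fitness evaluations, which matches the thesis's observation that the savings appear only when the search space is large.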
Advisors/Committee Members: Gilberto Arantes Carrijo, Luciano Xavier Medeiros, Alexandre Coutinho Mateus, Paulo Sérgio Caparelli, Valtemir Emerencio do Nascimento.
Subjects/Keywords: Algoritmos genéticos; Equações parabólicas; Divisor de passos de Fourier; Custo computacional; ENGENHARIA ELETRICA; Ondas eletromagnéticas; Algoritmos genéticos; Genetic algorithms; Parabolic equations; Split-step Fourier; Computational cost
APA (6th Edition):
Vilanova, A. C. (2013). Otimização de um modelo de propagação com múltiplos obstáculos na troposfera utilizando algoritmo genético. (Thesis). Federal University of Uberlândia. Retrieved from http://www.bdtd.ufu.br//tde_busca/arquivo.php?codArquivo=4703
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Universidade do Minho
13.
Marques, Raquel Marina Coelho.
Uma abordagem construtivista de “Terra no Espaço”: um estudo centrado nas explicações dos alunos do 7.º ano de escolaridade.
Degree: 2019, Universidade do Minho
URL: http://hdl.handle.net/1822/62144
In Portugal, Astronomy is taught to students in the 3rd cycle of basic education (7th grade), but it often gives rise to explanations of astronomical phenomena that are not scientifically accepted. Several authors have noted that the absence of night-sky observations and students' alternative conceptions about Astronomy have limited their understanding. This internship report, part of the curricular unit "Estágio Profissional" of the Master's in Physics and Chemistry Teaching for the 3rd cycle of Basic Education and Secondary Education, describes a pedagogical intervention with 7th-grade students. The intervention used computational simulations and laboratory activities with low-cost materials to teach the topic "Terra no Espaço" (Earth in Space). These resources were embedded in a constructivist approach, and their effectiveness was evaluated in terms of their impact on students' explanations of astronomical phenomena. Before the teaching approach, a questionnaire was administered to 16 students to analyse their explanations of astronomical phenomena and how they represent them; the results informed the design of the teaching approach used in class. After the intervention, a questionnaire was administered to the students to analyse whether or not their explanations had evolved. Overall, the results showed that students improved their explanations of the phenomena. However, both before and after the intervention, students' explanations included alternative conceptions. Implications and suggestions for future pedagogical interventions are provided.
Advisors/Committee Members: Afonso, Ana Sofia (advisor).
Subjects/Keywords: Atividades com materiais de baixo custo; Construtivismo; Ensino de astronomia; Simulação computacional; Activities with low cost materials; Astronomy teaching; Computational simulation; Constructivism
APA (6th Edition):
Marques, R. M. C. (2019). Uma abordagem construtivista de “Terra no Espaço”: um estudo centrado nas explicações dos alunos do 7.º ano de escolaridade. (Masters Thesis). Universidade do Minho. Retrieved from http://hdl.handle.net/1822/62144

Universitat Pompeu Fabra
14.
Marcos Sanmartín, Encarni.
Embodied decision making and its neural substrate.
Degree: Departament de Tecnologies de la Informació i les Comunicacions, 2014, Universitat Pompeu Fabra
URL: http://hdl.handle.net/10803/285379
Decisions are the outcome of a deliberation process that evaluates the suitability of specific options. Studies of decision making have mostly been conducted using constrained tasks in which humans or animals are asked to choose between options. However, the influence that factors related to the embodiment of decision making may have on this process has frequently been ignored. In this thesis, we adopt a combined experimental and theoretical approach to examine the influence these factors have on decision making. Our results confirm a substantial biasing of behaviour and of neural activity by factors external to the goal of the task itself. We use computational models to interpret this bias, which in turn gives insight into the mechanism producing it. The thesis concludes by presenting a single model that integrates all the findings presented, which could serve as a new theoretical framework for future research. Overall, the results included here amount to significant progress in the understanding of embodied decision making, contributing new knowledge about its neural mechanisms and theoretical models.
Advisors/Committee Members: Verschure, Paul F. M. J. (director).
Subjects/Keywords: Embodiment; Decision making; Robots; Across-trials variability; Memory; Context; Motor cost; Modulation; Uncertainty; Computational model; Corporificación; Toma de decisiones; Variabilidad entre pruebas; Memoria; Contexto; Coste motor; Modulación; Incertidumbre; Modelo computacional; Corporificació; Presa de decisions; Variabilitat entre proves; Memòria; Context; Cost motor; Modulació; Incertesa; Model computacional; 62
APA (6th Edition):
Marcos Sanmartín, E. (2014). Embodied decision making and its neural substrate. (Thesis). Universitat Pompeu Fabra. Retrieved from http://hdl.handle.net/10803/285379

Linköping University
15.
Ekholm, Harald.
Cost optimization in the cloud : An analysis on how to apply an optimization framework to the procurement of cloud contracts at Spotify.
Degree: Production Economics, 2020, Linköping University
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-168441
In the modern era of IT, cloud computing is becoming the new standard. Companies have gone from owning their own data centers to procuring virtualized computational resources as a service. This technology opens up elasticity and cost savings: computational resources have gone from being a capital expenditure to an operational expenditure. Vendors such as Google, Amazon, and Microsoft offer these services globally with different provisioning alternatives. In this thesis, we focus on providing a cost optimization algorithm for Spotify on the Google Cloud Platform. To achieve this we construct an algorithm that breaks the problem into four parts. Firstly, we generate trajectories of monthly active users. Secondly, we split these trajectories up by region and redistribute monthly active users to better describe the actual Google Cloud Platform footprint. Thirdly, we calculate usage-per-monthly-active-user quotas from a representative week of usage and use these to translate the redistributed monthly-active-user trajectories into usage. Lastly, we apply an optimization algorithm to these trajectories and obtain an objective value. These results are then evaluated using statistical methods to determine their reliability. The final model solves the problem to optimality and provides statistically reliable results. As a consequence, we can give recommendations to Spotify on how to minimize their cloud cost while considering the uncertainty in demand.
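The four-step pipeline in the abstract (trajectory generation, redistribution, usage translation, optimization) is not spelled out in this record. A minimal sketch under assumed mechanics — Gaussian monthly growth, a single usage-per-MAU quota, and a committed-use-plus-on-demand price structure (all hypothetical parameters, not Spotify's or Google's actual pricing) — might look like:

```python
import random

def simulate_mau_trajectories(start_mau, growth_mean, growth_std, months, n_paths, seed=0):
    """Generate Monte Carlo trajectories of monthly active users (MAU)."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        mau, path = start_mau, []
        for _ in range(months):
            mau *= 1.0 + rng.gauss(growth_mean, growth_std)  # random monthly growth
            path.append(mau)
        paths.append(path)
    return paths

def usage_from_mau(path, usage_per_mau):
    """Translate an MAU trajectory into compute usage via a per-user quota."""
    return [mau * usage_per_mau for mau in path]

def committed_use_cost(usage, commit_level, commit_price, on_demand_price):
    """Cost of a fixed monthly commitment plus on-demand overflow."""
    cost = 0.0
    for u in usage:
        cost += commit_level * commit_price                   # pay the commitment regardless
        cost += max(0.0, u - commit_level) * on_demand_price  # overflow at on-demand rates
    return cost

def best_commitment(paths, usage_per_mau, levels, commit_price, on_demand_price):
    """Pick the commitment level with the lowest average cost across trajectories."""
    def avg_cost(level):
        return sum(
            committed_use_cost(usage_from_mau(p, usage_per_mau),
                               level, commit_price, on_demand_price)
            for p in paths
        ) / len(paths)
    return min(levels, key=avg_cost)
```

Averaging the cost over many sampled trajectories is what lets the recommendation account for demand uncertainty, as the abstract describes.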
Subjects/Keywords: cloud computing; Spotify; monte carlo; MC; stochastic processes; stochastic programming; Google Cloud Platform; GCP; IT; computational resources; optimization; cost optimization; forecast; procurement; Mathematics; Matematik; Probability Theory and Statistics; Sannolikhetsteori och statistik; Mathematical Analysis; Matematisk analys; Computational Mathematics; Beräkningsmatematik
APA (6th Edition):
Ekholm, H. (2020). Cost optimization in the cloud : An analysis on how to apply an optimization framework to the procurement of cloud contracts at Spotify. (Thesis). Linköping University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-168441

University of Texas – Austin
16.
Thammarak, Punchet.
Dynamic response of laterally-loaded piles.
Degree: PhD, Civil Engineering, 2009, University of Texas – Austin
URL: http://hdl.handle.net/2152/6578
The laterally-loaded pile has long been a topic of research interest. Several models of the soil surrounding a pile have been developed for simulation of lateral pile behavior, ranging from simple spring-and-dashpot models to sophisticated three-dimensional finite-element models. However, results from the available pile-soil models are not accurate due to inherent approximations or constraints. In the spring-and-dashpot representation, the real and imaginary stiffness are calculated by idealizing the soil domain as a series of plane-strain slices, which leads to unrealistic pile behavior at low frequencies, while three-dimensional finite-element analysis is very computationally demanding. This dissertation research therefore seeks to contribute procedures that are computationally cost-effective while the accuracy of the computed response remains identical or close to that of the three-dimensional finite-element solution. Because purely-elastic soil displacement variations in the azimuthal direction are known, the surrounding soil can be formulated as an equivalent one-dimensional model, leading to a significant reduction in computational cost. The pile with the conventional soil-slice model is explored first. Next, models with shear stresses between soil slices, both including and neglecting the vertical soil displacement, are investigated. Excellent agreement of results from the proposed models with three-dimensional finite-element solutions can be achieved at only small additional computational cost.
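For intuition on the "real and imaginary stiffness" of the spring-and-dashpot idealization, here is a generic frequency-domain sketch (not the dissertation's formulation; `k` and `c` are placeholder constants): the elastic spring contributes the real part of the impedance and the dashpot the frequency-dependent imaginary part.

```python
import cmath

def lateral_impedance(k, c, omega):
    """Complex dynamic impedance of a spring-dashpot soil model:
    real part = elastic stiffness, imaginary part = damping (grows with frequency)."""
    return k + 1j * omega * c

def steady_state_response(force, k, c, omega):
    """Pile-head displacement amplitude and phase under a harmonic lateral load."""
    K = lateral_impedance(k, c, omega)
    u = force / K
    return abs(u), cmath.phase(u)
```

At zero frequency this reduces to the static response F/k; as frequency grows, the dashpot term shrinks the amplitude and shifts the phase.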
Advisors/Committee Members: Tassoulas, John Lambros (advisor).
Subjects/Keywords: Laterally-loaded piles; Computational cost; One-dimensional models; Soil slices; Soil displacement; Three-dimensional finite-element models; Cost-effectiveness; Soil models; Pile models; Simulation methods
APA (6th Edition):
Thammarak, P. (2009). Dynamic response of laterally-loaded piles. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/6578
17.
Μητροπούλου, Χαρίκλεια.
Advanced computational methods for seismic design and assessment of reinforced concrete structures.
Degree: 2011, National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ)
URL: http://hdl.handle.net/10442/hedi/24794
The major objective of this dissertation is to develop an integrated framework for the economical and safe antiseismic design and assessment of new reinforced concrete structures by means of life-cycle cost and fragility analysis. This objective is achieved through the following tasks: (i) In the first part of the dissertation, numerical calibration of some of the most popular damage indices (DIs) proposed by researchers was performed in order to quantify the extent of damage in reinforced concrete structures. (ii) A critical assessment of prescriptive design procedures was performed with reference to their ability to lead to safe and economical designs, and a comparison between prescriptive and performance-based seismic design procedures was carried out. For this purpose a number of structural seismic design optimization problems were formulated. In addition, based on the calibrated DIs, structural optimization problems were formulated aiming at identifying the DI, or combination of DIs, that provides reliable information on damage so that it can be incorporated into a performance-based design framework. The ultimate objective of this task is to compare lower-bound designs that satisfy the design code requirements in the most cost-effective way using a Life-Cycle Cost Analysis (LCCA) methodology. (iii) The next step is to improve the LCCA procedure with reference to both its robustness and efficiency. (iv) The last objective is to improve the fragility analysis procedure with reference to both robustness and efficiency. Efficiency is achieved by introducing a neural network-based incremental dynamic analysis (IDA) procedure that reduces the computational effort by an order of magnitude.
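The dissertation's LCCA formulation is not reproduced in this record; as a rough illustration of the standard discounted expected-annual-loss form of life-cycle cost (illustrative inputs only, not the thesis's actual model):

```python
def life_cycle_cost(initial_cost, annual_damage_costs, discount_rate, years):
    """Expected life-cycle cost: initial construction cost plus discounted
    expected annual seismic losses over the design life.
    annual_damage_costs: expected yearly loss per damage state
    (annual exceedance probability times repair cost)."""
    expected_annual_loss = sum(annual_damage_costs)
    lcc = initial_cost
    for t in range(1, years + 1):
        lcc += expected_annual_loss / (1.0 + discount_rate) ** t  # discount year t
    return lcc
```

Comparing candidate designs by this total, rather than by initial cost alone, is what allows a "lower-bound" code-compliant design to be ranked against stronger but costlier alternatives.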
Subjects/Keywords: Υπολογιστική αντισεισμική μηχανική; Δείκτες βλάβης; Ανάλυση κόστους κύκλου ζωής; Ανάλυση τρωτότητας; Βέλτιστος σχεδιασμός; Νευρωνικά δίκτυα; Computational earthquake engineering; Damage indices; Life cycle cost analysis; Fragility analysis; Optimum design of structures; Neural networks
APA (6th Edition):
Μητροπούλου, Χ. (2011). Advanced computational methods for seismic design and assessment of reinforced concrete structures. (Thesis). National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ). Retrieved from http://hdl.handle.net/10442/hedi/24794
18.
Herouard, Nicolas.
Optimisation, analyse et comparaison de méthodes numériques déterministes par la dynamique des gaz raréfiés : Optimization, analysis and comparison of deterministic numerical methods for rarefied gas dynamics.
Degree: Docteur es, Mathématiques appliquées et calcul scientifique, 2014, Bordeaux
URL: http://www.theses.fr/2014BORD0473
During the atmospheric re-entry of a space vehicle, the rarefied air flow around the body is governed by a kinetic model derived from the Boltzmann equation, which describes the evolution of a distribution function of gas molecules in phase space, a 6-dimensional space in the general case. Consequently, a deterministic numerical simulation of this flow requires large computational resources, both in memory storage and CPU time. The aim of this work is to reduce those resources, using two different approaches. The first is a method for optimizing the size of the discrete velocity grid used in the computation by predicting the shape of the distributions in velocity space, assuming that the gas is close to thermodynamic equilibrium. The second approach attempts to exploit the asymptotic-preservation properties of Discontinuous Galerkin schemes, already established for neutron transport, which make it possible to account for the effects of kinetic boundary layers even when they are not resolved by the mesh, whereas classical methods (such as Finite Volumes) require very refined meshes in the direction normal to the walls. In a last part, we compare the performance of these Discontinuous Galerkin schemes with some classical Finite Volume schemes, applied to the BGK equation in a simple case, paying particular attention to their near-wall behavior and numerical boundary conditions.
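The first approach, bounding the discrete velocity grid by predicting the distribution's support near equilibrium, can be sketched in one dimension: if the gas is close to a Maxwellian, the distribution is negligible beyond a few thermal speeds of the mean velocity, so the grid need not extend further. The cutoff factor `c` and the gas constant default are illustrative assumptions, not values from the thesis.

```python
import math

def velocity_grid(mean_u, temperature, R=287.0, c=4.0, n_points=32):
    """Bound a 1-D discrete velocity grid from near-equilibrium moments:
    a Maxwellian centered at mean_u is negligible beyond c thermal speeds."""
    thermal_speed = math.sqrt(R * temperature)
    vmin = mean_u - c * thermal_speed
    vmax = mean_u + c * thermal_speed
    dv = (vmax - vmin) / (n_points - 1)
    return [vmin + i * dv for i in range(n_points)]
```

Sizing the grid from the local moments instead of a worst-case global bound is what shrinks the phase-space storage that the abstract identifies as the dominant cost.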
Advisors/Committee Members: Mieussens, Luc (thesis director).
Subjects/Keywords: Aérodynamique; Conditions aux limites; Équation de Boltzmann-BGK; Coût de calcul; Limite asymptotique; Schémas Galerkin Discontinu; Méthodes numériques; Régime raréfié; Aerodynamics; Boundary conditions; Computational cost; Asymptotic limit; Discontinuous Galerkin schemes; Numerical methods; Boltzmann-BGK equation; Rarefied regime
APA (6th Edition):
Herouard, N. (2014). Optimisation, analyse et comparaison de méthodes numériques déterministes par la dynamique des gaz raréfiés : Optimization, analysis and comparison of deterministic numerical methods for rarefied gas dynamics. (Doctoral Dissertation). Bordeaux. Retrieved from http://www.theses.fr/2014BORD0473

University of Maryland
19.
Iyengar, Deepak.
Effect of Transaction Cost and Coordination Mechanisms on the Length of the Supply Chain.
Degree: Decision and Information Technologies, 2005, University of Maryland
URL: http://hdl.handle.net/1903/3178
A drastic reduction in the cost of transmitting information has tremendously increased the flow and availability of information. Greater availability of information increases the firm's ability to manage its supply chain and, therefore, increases its operational performance. However, the current literature is ambiguous about whether increased information flow leads to a reduction or an increase in transaction cost, which would enable supply chains to migrate towards more market-based or more hierarchy-based transactions. This research empirically demonstrates that the governance structure of supply chains changed towards market-based transactions due to a lowering of transaction costs after 1987. Much of the results are based on the theory of Transaction Cost Economics (TCE) and the role of asset specificity, uncertainty, and frequency in determining whether industries are moving towards markets or hierarchies. Unlike previous supply chain management literature that focuses on relatively short supply chains consisting of two or three supply chain members, Input-Output tables allow for analysis of supply chains with many more members. This dissertation uses the 1982, 1987, 1992, and 1997 U.S. Benchmark Input-Output tables published by the Bureau of Economic Analysis to analyze supply chains. In so doing, it not only provides insight into how supply chain structures are changing but also offers a sample methodology for other researchers interested in using Input-Output analysis for further supply chain management research.
The second part of the dissertation examines the effect of different coordination mechanisms on supply chain length and supply chain performance using simulation. Three different heuristics that model ordering policies are used to simulate coordination mechanisms. Efficiency is measured on the basis of minimized total net stock for each heuristic used. The results are checked for robustness by using four different demand distributions. The results indicate that if a supply chain has minimized its net stock, then the heuristics used by the various echelons in the supply chain need not be harmonized. Also, disintermediation helps improve the performance of the supply chain.
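The simulation is not specified in this record beyond "heuristics that model ordering policies." A minimal single-echelon sketch of one common such heuristic, an order-up-to policy (chosen here purely for illustration), recording the net stock per period that the abstract uses as its efficiency measure:

```python
def simulate_echelon(demands, order_up_to, lead_time=1):
    """Single echelon under an order-up-to ordering heuristic.
    Returns net stock per period (negative values represent backlog)."""
    inventory = order_up_to
    pipeline = [0] * lead_time          # orders in transit, one slot per period of lead time
    net_stock = []
    for d in demands:
        inventory += pipeline.pop(0)    # receive the oldest outstanding order
        inventory -= d                  # serve demand (may go negative = backlog)
        position = inventory + sum(pipeline)
        pipeline.append(max(0, order_up_to - position))  # order back up to the target
        net_stock.append(inventory)
    return net_stock
```

Chaining several such echelons, each with its own heuristic and demand distribution, and summing the resulting net stock is one way such a multi-echelon comparison could be set up.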
Advisors/Committee Members: Bailey, Joseph P. (advisor), Evers, Philip T. (advisor).
Subjects/Keywords: Business Administration, Management; Business Administration, General; Transaction Cost Economics; Input-Output Analysis; Supply Chain; Computational Economics; Coordination
APA (6th Edition):
Iyengar, D. (2005). Effect of Transaction Cost and Coordination Mechanisms on the Length of the Supply Chain. (Thesis). University of Maryland. Retrieved from http://hdl.handle.net/1903/3178
20.
Δροσίτης, Ιωάννης.
Δρομολόγηση φωλιασμένων βρόχων με τη χρήση μεθόδων υπολογιστικής γεωμετρίας.
Degree: 2002, National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ)
URL: http://hdl.handle.net/10442/hedi/16725
One of the major problems in the area of parallel processing is scheduling nested loops. This thesis studies the problem of scheduling nested loops with uniform dependencies, focusing on the geometric properties of the index space and the dependence vectors of the problem. Initially, a new algorithm for mapping onto distributed-memory architectures is presented. The technique is based on mapping onto systolic arrays. After the index space transformation is performed, the transformed index space is partitioned and mapped to the given architecture, with the partitioning made along the directions of the transformed index space boundaries. This method achieves optimal load balancing between processors, although the scheduling complexity is not optimal. Next, the geometric properties of the index space of uniform nested-loop problems are studied. Formal expressions that give the execution wavefront at any time instance are presented. Although they can only be solved by integer linear programming techniques, this is the first time they have been computed in closed form. Some special cases are also noted: when certain inequalities between the algorithm's dependence vectors hold, the complexity of computing the execution wavefront decreases dramatically, and it can be computed in polynomial or even linear time. Based on the above geometric properties, a technique is presented for scheduling any nested-loop problem that follows the unit-execution, zero-communication-time (UET) model. This scheduling method is nearly optimal. In addition, we introduce a method of reducing problems under the unit-execution, unit-communication-time (UET-UCT) model to pure UET equivalents. This is one of the main contributions of our work, as it enables any scheduling technique from the bibliography for the UET case to apply to any problem with unit communication cost.
Within the scheduling algorithms presented in this thesis, a new method of computing the hyperplane vector Π is introduced. This method is explicitly based on the algorithm's dependence vectors and on how these vectors allow the execution of the index space points. It is, in fact, a revision of the convex-hull computing method called QuickHull. It should be noted that, in comparison with the other methods in the bibliography, it achieves quite a small complexity. Finally, the generalization of the above scheduling techniques to n-dimensional problems is presented. Simulation programs have been implemented in order to test the scheduling methods of this thesis. The proposed geometric method is compared with the hyperplane method, as well as with the exhaustive scheduling procedure, also called optimal. The comparison results are presented analytically.
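The closed-form wavefront expressions are not reproduced in this record; for small index spaces, the UET execution wavefront can be obtained directly by dynamic programming over the uniform dependence vectors. This brute-force illustration (not the thesis's geometric method) assumes lexicographically positive dependence vectors, so predecessors are always visited first:

```python
from itertools import product

def earliest_times(bounds, deps):
    """Earliest UET execution time of each index point of a uniform-dependence
    nested loop: t(j) = 1 + max over dependence vectors d of t(j - d)."""
    t = {}
    for j in product(*(range(b) for b in bounds)):  # lexicographic traversal
        preds = []
        for d in deps:
            p = tuple(a - b for a, b in zip(j, d))
            if p in t:                  # predecessor inside the index space
                preds.append(t[p])
        t[j] = 1 + max(preds) if preds else 0
    return t

def wavefront(t, time):
    """Index points executed at a given time instance."""
    return sorted(j for j, v in t.items() if v == time)
```

For the common dependence set {(1,0), (0,1)} this reproduces the familiar diagonal wavefronts t(j) = j1 + j2; the thesis's contribution is computing such fronts geometrically without enumerating the index space.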
Subjects/Keywords: Δρομολόγηση; Φωλιασμένοι βρόχοι; Κόστος επικοινωνίας; Υπολογιστική γεωμετρία; Κυρτό περίγραμμα; Scheduling; Nested loops; Communication cost; Computational geometry; Convex hull; UET - UCT
APA (6th Edition):
Δροσίτης, Ι. (2002). Δρομολόγηση φωλιασμένων βρόχων με τη χρήση μεθόδων υπολογιστικής γεωμετρίας. (Thesis). National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ). Retrieved from http://hdl.handle.net/10442/hedi/16725
21.
Lee, Jin Woo.
Multi-level Decoupled Optimization of Wind Turbine Structures Using Coefficients of Approximating Functions as Design Variables.
Degree: PhD, Mechanical Engineering, 2017, University of Toledo
URL: http://rave.ohiolink.edu/etdc/view?acc_num=toledo1501003238831086
▼ This dissertation proposes a multi-level optimization method for slender structures such as the blades or towers of wind turbines. The method is well suited to structural optimization of slender structures with a large number of design variables (DVs). It uses a two-level optimization process: a high level for global optimization of the structure and a low level for optimization of its sectioned computational stations. The high-level optimization uses approximating functions to define target structural properties, such as stiffness, along the length of the structure. The approximating functions are functions of the distance from the root of the structure, defined using basis functions such as polynomials or exponential functions. The high-level DVs are the coefficients of these functions, so the number of high-level DVs is independent of the number of sections. Moreover, selecting smooth approximating functions helps to obtain alternative designs with smooth shapes. The low-level optimization finds an optimum parametric design, such as laminate layups, that matches the target structural properties defined by the high-level optimization. At the low level, the proposed method uses one optimizer per section. Each optimizer is independent of the optimizers in the other sections, thereby decomposing one large optimization problem into several small ones. This approach reduces the number of DVs per optimizer at the low level, which shrinks the design space of each section and eliminates the coupling between the sections' design spaces. Once optimum designs are found for all sections at the low level, the high-level solvers evaluate them for the entire structure. The advantage of the proposed method is that it reduces the number of iterations of the high-level optimization, because it considers a small number of high-level DVs. Computational efficiency increases because the computationally expensive high-level solvers need to be run less frequently to obtain an optimum solution. An additional advantage is that the method produces many feasible alternatives. Using example problems, the dissertation demonstrates that the proposed method converges faster in the early iterations and generates more alternative designs with smooth geometry than traditional single-level methods.
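The two-level decomposition described above can be sketched in a few lines of Python: a handful of polynomial coefficients (the high-level DVs) define a target stiffness profile along the span, and each computational station independently solves for a local section parameter that matches it. This is a minimal illustration, not the dissertation's implementation; the thin-walled-tube inertia formula, the material constant, and the choice of wall thickness as the low-level variable are assumptions made for brevity.

```python
import numpy as np

def target_stiffness(coeffs, s):
    """High level: bending stiffness profile as a polynomial in the
    normalized distance s from the root; the coefficients are the DVs."""
    return np.polyval(coeffs, s)

def match_section(target_EI, radius=0.5, E=70e9):
    """Low level: choose a wall thickness t so that E*I(t) matches the
    target, using the thin-walled tube approximation I ~ pi * r^3 * t.
    Each section is solved independently, so the sections decouple."""
    return target_EI / (E * np.pi * radius**3)

# Three high-level DVs describe the whole profile, no matter how many
# sections the structure is divided into.
coeffs = [-2.0e6, -1.0e6, 8.0e6]        # stiffness tapers from root to tip
stations = np.linspace(0.0, 1.0, 5)     # five computational stations
thicknesses = [match_section(target_stiffness(coeffs, s)) for s in stations]
```

A real low-level step would search over laminate layups against several matched properties at once; the point here is only that the high-level DV count (three coefficients) stays fixed as the number of stations grows.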
Advisors/Committee Members: Nikolaidis, Efstratios (Committee Chair), Devabhaktuni, Vijay (Committee Co-Chair), Afjeh, Abdollah (Committee Co-Chair).
Subjects/Keywords: Energy; Engineering; Environmental Economics; Environmental Engineering; Mechanical Engineering; Operations Research; Aerospace Engineering; optimization; multi-level; decoupled; structural; design; analysis; approximating function; design variable; computational cost; composite material; renewable energy; wind turbine; blade; tower; modeFRONTIER; FAST; Department of Energy
APA (6th Edition):
Lee, J. W. (2017). Multi-level Decoupled Optimization of Wind Turbine
Structures Using Coefficients of Approximating Functions as Design
Variables. (Doctoral Dissertation). University of Toledo. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=toledo1501003238831086
Chicago Manual of Style (16th Edition):
Lee, Jin Woo. “Multi-level Decoupled Optimization of Wind Turbine
Structures Using Coefficients of Approximating Functions as Design
Variables.” 2017. Doctoral Dissertation, University of Toledo. Accessed January 23, 2021.
http://rave.ohiolink.edu/etdc/view?acc_num=toledo1501003238831086.
MLA Handbook (7th Edition):
Lee, Jin Woo. “Multi-level Decoupled Optimization of Wind Turbine
Structures Using Coefficients of Approximating Functions as Design
Variables.” 2017. Web. 23 Jan 2021.
Vancouver:
Lee JW. Multi-level Decoupled Optimization of Wind Turbine
Structures Using Coefficients of Approximating Functions as Design
Variables. [Internet] [Doctoral dissertation]. University of Toledo; 2017. [cited 2021 Jan 23].
Available from: http://rave.ohiolink.edu/etdc/view?acc_num=toledo1501003238831086.
Council of Science Editors:
Lee JW. Multi-level Decoupled Optimization of Wind Turbine
Structures Using Coefficients of Approximating Functions as Design
Variables. [Doctoral Dissertation]. University of Toledo; 2017. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=toledo1501003238831086
22.
Viricel, Clement.
Contributions au développement d'outils computationnels de design de protéine : méthodes et algorithmes de comptage avec garantie : Contribution to protein design tools : counting methods and algorithms.
Degree: Docteur es, Mathématiques Appliquées, 2017, Toulouse, INSA
URL: http://www.theses.fr/2017ISAT0019
► This thesis addresses two intrinsically linked subjects: the computation of the normalizing constant of a Markov random field and the estimation of the binding affinity of…
(more)
▼ This thesis is focused on two intrinsically related subjects: the computation of the normalizing constant of a Markov random field and the estimation of the binding affinity of protein-protein interactions. First, to tackle this #P-complete counting problem, we developed Z*, based on the pruning of negligible potential quantities. It has been shown to be more efficient than various state-of-the-art methods on instances derived from protein-protein interaction models. Then, we developed #HBFS, an anytime guaranteed counting algorithm which proved to be even better than its predecessor. Finally, we developed BTDZ, an exact algorithm based on tree decomposition. BTDZ has already proven its efficiency on instances from coiled-coil protein interactions. These algorithms all rely on methods stemming from graphical models: local consistencies, variable elimination and tree decomposition. With the help of existing optimization algorithms, Z* and Rosetta energy functions, we developed a package that estimates the binding affinity of a set of mutants in a protein-protein interaction. We statistically analyzed our estimation on a database of binding affinities and compared it with state-of-the-art methods. It appears that our software is qualitatively better than these methods.
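The counting problem these algorithms attack can be made concrete with a toy enumeration: summing the weights of all assignments of a small chain-structured model gives Z exactly, and cutting branches whose factor weight is negligible gives a cheaper lower bound, in the spirit (though not the detail) of the Z* pruning mentioned above. The chain potentials and the threshold below are invented for illustration.

```python
import math

def partition_function(n_vars, domain, factor, prune_eps=0.0):
    """Z of a toy chain model by depth-first enumeration.  The weight of
    a full assignment is the product of per-variable factors; with
    prune_eps > 0, branches whose next factor is negligible are cut,
    yielding a lower bound on Z."""
    def rec(assign):
        if len(assign) == n_vars:
            return 1.0
        total = 0.0
        for v in domain:
            w = factor(len(assign), v, assign)
            if w > prune_eps:            # skip negligible contributions
                total += w * rec(assign + (v,))
        return total
    return rec(())

def ising_factor(i, v, assign):
    """Unary field plus a coupling to the previous spin."""
    energy = 0.3 * v
    if assign:
        energy += -0.5 * v * assign[-1]
    return math.exp(-energy)

Z_exact = partition_function(4, (-1, 1), ising_factor)
Z_lower = partition_function(4, (-1, 1), ising_factor, prune_eps=0.5)
```

With the threshold at 0.5 some low-weight branches are cut, so `Z_lower` undershoots `Z_exact` while visiting fewer assignments.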
Advisors/Committee Members: Schiex, Thomas (thesis director), Barbe, Sophie (thesis director).
Subjects/Keywords: Modèle graphique; Champ de Markov; Réseau de fonctions de coût; Comptage; P complet; Fonction de partition; Constante de normalisation; Design computationnel de protéine; Affinité de liaison; Interaction protéine-protéine; Algorithm; Markov random field; Cost function network; Couning; #P complete; Partition function; Normalizing constant; Computational protein design; Binding affinity; Protein-protein interaction; 510; 004
APA (6th Edition):
Viricel, C. (2017). Contributions au développement d'outils computationnels de design de protéine : méthodes et algorithmes de comptage avec garantie : Contribution to protein design tools : counting methods and algorithms. (Doctoral Dissertation). Toulouse, INSA. Retrieved from http://www.theses.fr/2017ISAT0019
Chicago Manual of Style (16th Edition):
Viricel, Clement. “Contributions au développement d'outils computationnels de design de protéine : méthodes et algorithmes de comptage avec garantie : Contribution to protein design tools : counting methods and algorithms.” 2017. Doctoral Dissertation, Toulouse, INSA. Accessed January 23, 2021.
http://www.theses.fr/2017ISAT0019.
MLA Handbook (7th Edition):
Viricel, Clement. “Contributions au développement d'outils computationnels de design de protéine : méthodes et algorithmes de comptage avec garantie : Contribution to protein design tools : counting methods and algorithms.” 2017. Web. 23 Jan 2021.
Vancouver:
Viricel C. Contributions au développement d'outils computationnels de design de protéine : méthodes et algorithmes de comptage avec garantie : Contribution to protein design tools : counting methods and algorithms. [Internet] [Doctoral dissertation]. Toulouse, INSA; 2017. [cited 2021 Jan 23].
Available from: http://www.theses.fr/2017ISAT0019.
Council of Science Editors:
Viricel C. Contributions au développement d'outils computationnels de design de protéine : méthodes et algorithmes de comptage avec garantie : Contribution to protein design tools : counting methods and algorithms. [Doctoral Dissertation]. Toulouse, INSA; 2017. Available from: http://www.theses.fr/2017ISAT0019
23.
Li, Zhongliang.
Data-driven fault diagnosis for PEMFC systems : Integrating representation and classification methods for obstacle detection in road scenes.
Degree: Docteur es, Automatique, 2014, Aix Marseille Université
URL: http://www.theses.fr/2014AIXM4335
► This thesis is devoted to the study of fault diagnosis for PEMFC fuel cell systems. The aim is to improve the reliability…
(more)
▼ Aiming to improve the reliability and durability of Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems and to promote the commercialization of fuel cell technologies, this thesis is dedicated to the study of fault diagnosis for PEMFC systems. Data-driven fault diagnosis is the main focus. As a main branch of data-driven fault diagnosis, methods based on pattern classification techniques are studied first. Taking individual fuel cell voltages as the original diagnosis variables, several representative methodologies are investigated and compared from the perspective of online implementation. To address the shortcomings of conventional classification-based diagnosis methods, a novel diagnosis strategy is proposed: a new classifier named Sphere-Shaped Multi-class Support Vector Machine (SSM-SVM) and modified diagnostic rules are used to recognize novel faults, while an incremental learning method is extended to achieve online adaptation. Apart from the classification-based approach, a so-called partial model-based data-driven approach is introduced to handle PEMFC diagnosis in dynamic processes. With the aid of a subspace identification method (SIM), model-based residual generation is designed directly from the normal and dynamic operating data. Fault detection and isolation are then realized by evaluating the…
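The sphere-shaped classification idea lends itself to a compact sketch: each known fault class is summarized by a center and a radius in the space of diagnosis variables, and a sample that falls inside no sphere is reported as a novel fault. This nearest-center stand-in is far simpler than the SSM-SVM of the thesis (no kernels, no margin optimization), and the class names and synthetic data are purely illustrative.

```python
import numpy as np

def fit_spheres(X, y):
    """One (center, radius) pair per class; the radius is the largest
    training distance to the class center."""
    spheres = {}
    for c in np.unique(y):
        pts = X[y == c]
        center = pts.mean(axis=0)
        spheres[c] = (center, np.linalg.norm(pts - center, axis=1).max())
    return spheres

def diagnose(x, spheres):
    """Assign x to the best containing sphere (smallest distance/radius
    ratio); if no sphere contains x, report a novel fault."""
    label, best = "novel fault", 1.0
    for c, (center, r) in spheres.items():
        score = np.linalg.norm(x - center) / r     # <= 1 means inside
        if score <= best:
            label, best = c, score
    return label

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
y = np.array(["normal"] * 20 + ["flooding"] * 20)
spheres = fit_spheres(X, y)
known = diagnose(np.zeros(2), spheres)
novel = diagnose(np.array([10.0, 10.0]), spheres)
```

The "no containing sphere" branch is what makes the scheme open-world: unlike a plain multi-class classifier, it can refuse to label a sample, which is the hook for the incremental learning step described above.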
Advisors/Committee Members: Outbib, Rachid (thesis director), Giurgea, Stefan (thesis director), Hissel, Daniel (thesis director).
Subjects/Keywords: Système PEMFC; Diagnostic en ligne; Des tensions cellulaires; Reconnaissance de forme; La précision du diagnostic; Le coût de calcul; Systèmes embarqués; La détection des défauts roman; Adaptation en ligne; L'identification du modèle; PEMFC system; Online diagnosis; Cell voltages; Pattern classification; Diagnosis accuracy; Computational cost; Embedded system; Novel fault detection; Online adaptation; Model identification
APA (6th Edition):
Li, Z. (2014). Data-driven fault diagnosis for PEMFC systems : Integrating representation and classification methods for obstacle detection in road scenes. (Doctoral Dissertation). Aix Marseille Université. Retrieved from http://www.theses.fr/2014AIXM4335
Chicago Manual of Style (16th Edition):
Li, Zhongliang. “Data-driven fault diagnosis for PEMFC systems : Integrating representation and classification methods for obstacle detection in road scenes.” 2014. Doctoral Dissertation, Aix Marseille Université. Accessed January 23, 2021.
http://www.theses.fr/2014AIXM4335.
MLA Handbook (7th Edition):
Li, Zhongliang. “Data-driven fault diagnosis for PEMFC systems : Integrating representation and classification methods for obstacle detection in road scenes.” 2014. Web. 23 Jan 2021.
Vancouver:
Li Z. Data-driven fault diagnosis for PEMFC systems : Integrating representation and classification methods for obstacle detection in road scenes. [Internet] [Doctoral dissertation]. Aix Marseille Université 2014. [cited 2021 Jan 23].
Available from: http://www.theses.fr/2014AIXM4335.
Council of Science Editors:
Li Z. Data-driven fault diagnosis for PEMFC systems : Integrating representation and classification methods for obstacle detection in road scenes. [Doctoral Dissertation]. Aix Marseille Université 2014. Available from: http://www.theses.fr/2014AIXM4335
24.
MADHUKAR, ENUGURTHI.
GENERATE TEST SELECTION STATISTICS WITH AUTOMATED MUTATION TESTING.
Degree: 2018, , Department of Computer Science and Engineering
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16836
► Context: The goal of this research is to form a correlation between code packages and test cases which is done by using automated weak…
(more)
▼ Context: The goal of this research is to form a correlation between code packages and test cases using automated weak mutation. The correlations formed are used as statistical test data for selecting relevant tests from the test suite, which decreases the size of the test suite and speeds up the process. Objectives: In this study, we investigated existing methods for reducing the computational cost of automated mutation testing. Following this investigation, we built an open-source automated mutation tool that mutates the source code, runs the test cases against the mutated code, and maps each failed test to the part of the code that was changed. The failed test cases give the correlation between the tests and the source code, which is collected as data for future use in test selection. Methods: Literature review and experimentation were chosen for this research. A controlled experiment was conducted at a Swedish ICT company, mutating camera code and testing it with the regression test suite. The camera code comes from historical continuous-integration data. We chose experimentation because this research method focuses on analyzing data and implementing a tool using historical data; the literature review was done to determine which kinds of mutation testing reduce the computational cost of the testing process. Results: Comparing source code mutated with regular mutants against weak mutants, we found that regular mutation operators achieved 62.1% correlation accuracy, while weak mutation operators achieved 85%.
Conclusions: The experiment shows that correlations generated through automated mutation testing in a continuous-integration environment can serve as test selection statistics that improve test-case selection in regression testing.
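The mutate-run-map loop described in the abstract fits in a few lines. For brevity the sketch below uses strong mutation (each mutant is executed against the full tests) rather than the weak mutation of the thesis, and the source snippet, mutation operators and tests are invented; what it shows is the raw correlation data: which tests each mutant causes to fail.

```python
SOURCE = "def price(qty, unit):\n    return qty * unit\n"

MUTATIONS = [("*", "+"), ("*", "-")]   # arithmetic-operator mutants

def run_tests(fn):
    """Tiny regression suite; returns the names of failing tests."""
    tests = {
        "test_zero":  lambda: fn(0, 10) == 0,
        "test_scale": lambda: fn(3, 2) == 6,
    }
    return sorted(name for name, check in tests.items() if not check())

# Map each mutant to the tests it causes to fail: these pairs are the
# raw correlation data between code locations and test cases.
correlations = {}
for old, new in MUTATIONS:
    namespace = {}
    exec(SOURCE.replace(old, new, 1), namespace)   # load the mutant
    correlations[f"{old}->{new}"] = run_tests(namespace["price"])
```

Inverting this map (test -> mutated locations that make it fail) is what lets a selection tool pick only the tests relevant to a changed package.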
Subjects/Keywords: Automation Testing; Test case Selection; Mutation Testing; Weak Mutants; Computational Cost; Regression Testing; Continuous Integration.; Computer Sciences; Datavetenskap (datalogi)
APA (6th Edition):
MADHUKAR, E. (2018). GENERATE TEST SELECTION STATISTICS WITH AUTOMATED MUTATION TESTING. (Thesis). , Department of Computer Science and Engineering. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16836
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
MADHUKAR, ENUGURTHI. “GENERATE TEST SELECTION STATISTICS WITH AUTOMATED MUTATION TESTING.” 2018. Thesis, , Department of Computer Science and Engineering. Accessed January 23, 2021.
http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16836.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
MADHUKAR, ENUGURTHI. “GENERATE TEST SELECTION STATISTICS WITH AUTOMATED MUTATION TESTING.” 2018. Web. 23 Jan 2021.
Vancouver:
MADHUKAR E. GENERATE TEST SELECTION STATISTICS WITH AUTOMATED MUTATION TESTING. [Internet] [Thesis]. , Department of Computer Science and Engineering; 2018. [cited 2021 Jan 23].
Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16836.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
MADHUKAR E. GENERATE TEST SELECTION STATISTICS WITH AUTOMATED MUTATION TESTING. [Thesis]. , Department of Computer Science and Engineering; 2018. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16836
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
25.
Mojir, Kayran Yousefi.
A Computational Model for Optimal Dimensional Speed on New High-Speed Lines.
Degree: Information and Communication Technology (ICT), 2011, KTH
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37230
► High-Speed Lines (HSL) in rail passenger services are regarded as among the most significant projects in many countries compared with other projects…
(more)
▼ High-Speed Lines (HSL) in rail passenger services are regarded as among the most significant projects in many countries compared with other projects in the transportation area. According to the EU (European Council Directive 96/48/EC, 2004), high-speed lines are either newly built lines for speeds of 250 km/h or greater or, in some cases, upgraded conventional lines. At the beginning of 2008 there were 10,000 km of new HSL in operation; taking upgraded conventional lines into account, there were 20,000 km of line in the world in total. The network is growing fast because the demand for shorter travel times and greater comfort is increasing rapidly.
Since HSL projects require substantial capital, it is increasingly important for governments and companies to estimate the total costs and benefits of building, maintaining and operating an HSL, so that they can make better and more reliable decisions when choosing between projects.
Many parameters affect the total costs and benefits of an HSL. The most important is the dimensional speed, which strongly influences the other parameters. For example, tunnels need a larger cross-section for higher speeds, which increases construction costs. More importantly, higher speed also influences the number of passengers attracted from other modes of transport. Owing to the large number of speed-dependent parameters, it is not a simple task to estimate an optimal dimensional speed by calculating the costs and benefits of an HSL manually. It is also difficult to analyze different speeds, as speed changes many other relevant parameters. There is therefore a need for a computational model to calculate the cost-benefit for different speeds. Based on such a model, it is possible to define different scenarios and compare them to see what the potentially optimal speed would be for a new HSL project. Besides the optimal speed, cost-benefit analysis (CBA) also makes it possible to analyze the effects of two other important parameters, fare and frequency. The probability model used in the calculation is based on an elasticity model, and the input parameters can be adjusted to calibrate the model appropriately. The Optimal High-Speed Line (OHSL) tool was developed to make the model accessible to users.
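The kind of sweep the OHSL tool automates can be illustrated with a toy net-benefit function: demand responds to travel time through an elasticity exponent, benefits scale with the time saved over a fixed alternative, and the annualized construction cost rises with the design speed. Every coefficient below is an invented placeholder, not a calibrated value from the thesis; only the shape of the calculation is the point.

```python
def net_benefit(v, length_km=500.0):
    """Toy annual net benefit of a line dimensioned for speed v (km/h).
    All coefficients are illustrative placeholders."""
    t = length_km / v                      # rail travel time, hours
    demand = 10e6 * (v / 250.0) ** 1.6     # elasticity: faster lines attract riders
    benefit = demand * 25.0 * (4.0 - t)    # value of time saved vs. a 4 h alternative
    benefit += 2.0e8                       # fixed external benefits (placeholder)
    cost = 1.0e4 * v ** 2                  # annuitized build cost, rising with speed
    return benefit - cost

# Sweep candidate dimensional speeds and keep the best trade-off.
best = max(range(200, 401, 10), key=net_benefit)
```

Because benefits grow sublinearly with speed while costs grow superlinearly, the sweep finds an interior optimum rather than always favoring the fastest line, which is the qualitative behavior a CBA over dimensional speed is meant to expose.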
Subjects/Keywords: HSL (High Speed Line); dimensional speed; computational model; speed-dependent parameter; elasticity model; CBA (Cost Benefit Analysis); OHSL
APA (6th Edition):
Mojir, K. Y. (2011). A Computational Model for Optimal Dimensional Speed on New High-Speed Lines. (Thesis). KTH. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37230
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Mojir, Kayran Yousefi. “A Computational Model for Optimal Dimensional Speed on New High-Speed Lines.” 2011. Thesis, KTH. Accessed January 23, 2021.
http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37230.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Mojir, Kayran Yousefi. “A Computational Model for Optimal Dimensional Speed on New High-Speed Lines.” 2011. Web. 23 Jan 2021.
Vancouver:
Mojir KY. A Computational Model for Optimal Dimensional Speed on New High-Speed Lines. [Internet] [Thesis]. KTH; 2011. [cited 2021 Jan 23].
Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37230.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Mojir KY. A Computational Model for Optimal Dimensional Speed on New High-Speed Lines. [Thesis]. KTH; 2011. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37230
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Linköping University
26.
Olofsson, Anders.
Modern Stereo Correspondence Algorithms : Investigation and Evaluation.
Degree: Information Coding, 2010, Linköping University
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57853
► Many different approaches have been taken towards solving the stereo correspondence problem and great progress has been made within the field during the last…
(more)
▼ Many different approaches have been taken towards solving the stereo correspondence problem and great progress has been made within the field during the last decade. This is mainly thanks to newly evolved global optimization techniques and better ways to compute pixel dissimilarity between views. The most successful algorithms are based on approaches that explicitly model smoothness assumptions made about the physical world, with image segmentation and plane fitting being two frequently used techniques.
Within the project, a survey of state-of-the-art stereo algorithms was conducted, and the theory behind them is explained. Techniques found interesting were implemented for experimental trials, and an algorithm aiming at state-of-the-art performance was implemented and evaluated. For several cases, state-of-the-art performance was reached.
To keep the computational complexity down, an algorithm relying on local winner-take-all optimization, image segmentation and plane fitting was compared against minimizing a global energy function formulated at pixel level. Experiments show that the local approach can match the global approach in several cases, but that problems sometimes arise, especially when large areas that lack texture are present. Such problematic areas are better handled by the explicit modeling of smoothness in global energy minimization.
Lastly, disparity estimation for image sequences was explored, and some ideas on how to use temporal information were implemented and tried. The ideas mainly relied on motion detection to determine parts that are static in a sequence of frames. Stereo correspondence for sequences is a rather new research field, and there is still a lot of work to be done.
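The local winner-take-all baseline that the global formulation is compared against can be written in a dozen lines: compute a matching cost for every candidate disparity and keep, per pixel, the cheapest one. The sketch below uses raw absolute differences with no aggregation window, segmentation or plane fitting, so it is deliberately weaker than the algorithms surveyed; the tiny test images are invented.

```python
import numpy as np

def wta_disparity(left, right, max_disp=3):
    """Local winner-take-all stereo: per pixel, pick the disparity d that
    minimizes |left(x, y) - right(x - d, y)|.  Real methods aggregate
    this cost over a window before taking the minimum."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, : w - d])
    return cost.argmin(axis=0)

# A single bright feature shifted by two pixels between the views.
left = np.array([[0, 0, 5, 0, 0, 0]] * 3, dtype=float)
right = np.array([[5, 0, 0, 0, 0, 0]] * 3, dtype=float)
disp = wta_disparity(left, right)
```

In the flat (textureless) regions of these images every disparity costs the same, so the winner is arbitrary; that is exactly the failure mode the abstract notes, and which global smoothness terms are designed to fix.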
Subjects/Keywords: stereo correspondence; stereo matching; cost function; cost aggregation; image segmentation; plane fitting; RANSAC; graph cuts; belief propagation; disparity; depth estimation; Computer Vision and Robotics (Autonomous Systems); Datorseende och robotik (autonoma system); Other Engineering and Technologies not elsewhere specified; Övrig annan teknik; Computational Mathematics; Beräkningsmatematik
APA (6th Edition):
Olofsson, A. (2010). Modern Stereo Correspondence Algorithms : Investigation and Evaluation. (Thesis). Linköping University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57853
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Olofsson, Anders. “Modern Stereo Correspondence Algorithms : Investigation and Evaluation.” 2010. Thesis, Linköping University. Accessed January 23, 2021.
http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57853.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Olofsson, Anders. “Modern Stereo Correspondence Algorithms : Investigation and Evaluation.” 2010. Web. 23 Jan 2021.
Vancouver:
Olofsson A. Modern Stereo Correspondence Algorithms : Investigation and Evaluation. [Internet] [Thesis]. Linköping University; 2010. [cited 2021 Jan 23].
Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57853.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Olofsson A. Modern Stereo Correspondence Algorithms : Investigation and Evaluation. [Thesis]. Linköping University; 2010. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57853
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
27.
Γιώτης, Αλέξιος.
Χρήση εξελικτικών τεχνικών, υπολογιστικής ευφυΐας και μεθόδων υπολογιστικής ρευστομηχανικής στη βελτιστοποίηση - αντίστροφη σχεδίαση πτερυγώσεων στροβιλομηχανών, μέσω παράλληλης επεξεργασίας.
Degree: 2003, National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ)
URL: http://hdl.handle.net/10442/hedi/16974
Subjects/Keywords: Αλγόριθμοι, Εξελικτικοί; Σχεδίαση πτερυγώσεων στροβιλομηχανών; Παράλληλη επεξεργασία; Βελτιστοποίηση; Δίκτυα, Νευρωνικά; Προβλήματα πολλαπλών στόχων; Μεταμοντέλα; Μείωση υπολογιστικού κόστους; Evolutionary algorithms; Turbomachinery cascade design; Parallel processing; Optimization; Networks, Neural; Multi-objective problems; Metamodels; Reduction of computational cost
APA (6th Edition):
Γιώτης, . . (2003). Χρήση εξελικτικών τεχνικών, υπολογιστικής ευφυΐας και μεθόδων υπολογιστικής ρευστομηχανικής στη βελτιστοποίηση - αντίστροφη σχεδίαση πτερυγώσεων στροβιλομηχανών, μέσω παράλληλης επεξεργασίας. (Thesis). National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ). Retrieved from http://hdl.handle.net/10442/hedi/16974
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Aristotle University Of Thessaloniki (AUTH); Αριστοτέλειο Πανεπιστήμιο Θεσσαλονίκης (ΑΠΘ)
28.
Μήττας, Νικόλαος.
Στατιστικές και υπολογιστικές μέθοδοι ανάπτυξης, βελτίωσης και σύγκρισης μοντέλων πρόβλεψης κόστους λογισμικού [Statistical and computational methods for the development, improvement and comparison of software cost prediction models].
Degree: 2009, Aristotle University Of Thessaloniki (AUTH); Αριστοτέλειο Πανεπιστήμιο Θεσσαλονίκης (ΑΠΘ)
URL: http://hdl.handle.net/10442/hedi/20557
► The plethora of Software Cost Estimation models proposed in the literature reveals that the prediction of the cost for a new software project is a…
(more)
▼ The plethora of Software Cost Estimation (SCE) models proposed in the literature reveals that predicting the cost of a new software project is a vital task affecting the well-balanced management of the development process. Overestimating a project may lead to the cancellation and loss of a contract, whereas underestimating it may reduce the earnings of the development organization. Hence, there is ongoing research in the SCE area attempting to build prediction models that provide accurate cost estimates. The present dissertation introduces statistical and computational methods for the comparison, improvement and development of Software Cost Estimation models. More specifically, the contribution of the dissertation focuses on the following subjects. Chapter 3 deals with the procedure for comparing alternative prediction models. Since many models can be fitted to a given dataset, a crucial issue is the selection of the most efficient prediction model. Most often this selection is based on comparisons of various accuracy measures that are functions of the model's errors. The usual practice, however, is to consider the model providing the best accuracy measure to be the most accurate, without testing whether this superiority is in fact statistically significant. This policy can lead to unstable and erroneous conclusions, since a small change in the data can overturn the selection of the best model. Moreover, the accuracy measures used in practice are statistics with unknown probability distributions, which makes testing any hypothesis about them with traditional parametric methods problematic. In this chapter, the use of statistical simulation tools is proposed in order to test the significance of the difference between the accuracies of two prediction methods.
The statistical simulation procedures involve permutation tests and bootstrap techniques for constructing confidence intervals for the difference of accuracy measures. These techniques repeat the data analysis a large number of times on replicated datasets, all drawn by resampling from the originally observed data. The resampling techniques can be used on their own to carry out a hypothesis test without any assumption about the distribution of the variables, or they can be combined with the traditional procedures to reinforce their results. Chapter 4 also deals with the comparison of alternative prediction models, but here the research interest focuses on a graphical investigation of their performance. More precisely, we introduce Regression Error Characteristic (REC) analysis, a powerful visualization tool with interesting geometrical properties, in order to validate and compare different prediction models easily, by simple inspection of a graph. The proposed formal framework covers different aspects of the estimation process, such as the calibration of the prediction methodology, the assessment of the applicability of the estimation method to a specific dataset, the identification…
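The resampling comparison and the REC analysis described in this abstract can be sketched roughly as follows. This is an illustrative sketch only, not the dissertation's actual code: the function names, the choice of mean absolute error (MAE) as the accuracy measure, and the paired-resampling setup are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mae(errors):
    """Mean absolute error, one of many possible accuracy measures."""
    return float(np.mean(np.abs(errors)))

def bootstrap_ci_diff(err_a, err_b, n_boot=2000, alpha=0.05):
    """Bootstrap confidence interval for the difference in MAE between two
    prediction models, resampling projects (paired errors) with replacement.
    If the interval excludes 0, the difference is significant at level alpha."""
    n = len(err_a)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)              # resampled project indices
        diffs[i] = mae(err_a[idx]) - mae(err_b[idx])
    return tuple(np.quantile(diffs, [alpha / 2, 1 - alpha / 2]))

def permutation_test_diff(err_a, err_b, n_perm=2000):
    """Paired permutation test: under the null hypothesis the two model
    labels are exchangeable within each project, so swap them at random
    and count how often the permuted difference exceeds the observed one."""
    observed = abs(mae(err_a) - mae(err_b))
    hits = 0
    for _ in range(n_perm):
        swap = rng.random(len(err_a)) < 0.5
        a = np.where(swap, err_b, err_a)
        b = np.where(swap, err_a, err_b)
        if abs(mae(a) - mae(b)) >= observed:
            hits += 1
    return hits / n_perm

def rec_curve(errors):
    """Regression Error Characteristic curve: for each error tolerance e,
    the fraction of predictions whose absolute error is at most e."""
    e = np.sort(np.abs(errors))
    return e, np.arange(1, len(e) + 1) / len(e)
```

Neither procedure assumes anything about the distribution of the errors, which is the point the abstract makes: the accuracy measures have unknown sampling distributions, so resampling stands in for parametric theory.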
Subjects/Keywords: Τεχνολογία λογισμικού; Στατιστικές μέθοδοι; Υπολογιστικές μέθοδοι; Εκτίμηση με αναλογίες; Εκτίμηση κόστους λογισμικού; Αναδειγματοληψία; Πρόβλεψη; Software engineering; Statistical methods; Computational methods; Estimation by analogy; Software cost estimation; Resampling; Prediction
APA (6th Edition):
Μήττας, Ν. (2009). Στατιστικές και υπολογιστικές μέθοδοι ανάπτυξης, βελτίωσης και σύγκρισης μοντέλων πρόβλεψης κόστους λογισμικού. (Thesis). Aristotle University Of Thessaloniki (AUTH); Αριστοτέλειο Πανεπιστήμιο Θεσσαλονίκης (ΑΠΘ). Retrieved from http://hdl.handle.net/10442/hedi/20557
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
29.
ZHENG, WEIBO.
Pore-Scale Simulation of Cathode Catalyst Layers in Proton Exchange Membrane Fuel Cells (PEMFCs).
Degree: PhD, Mechanical Engineering, 2019, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1555436163992345
► Understanding the complex phenomena occurring inside the catalyst layer of a proton exchange membrane fuel cell (PEMFC) is critical to design of an optimized structure…
(more)
▼ Understanding the complex phenomena occurring inside the catalyst layer of a proton exchange membrane fuel cell (PEMFC) is critical to the design of an optimized structure with low platinum loading and high performance. Because it describes the detailed physical and chemical processes in the catalyst layer at the resolution of the pore scale, pore-scale simulation is considered a promising approach for understanding the structure-performance relation and subsequently optimizing the catalyst layer. For widespread use in industry, however, the computational cost of pore-scale simulation needs to be reduced. To achieve this goal, a multiscale decomposition method is proposed that accelerates the convergence of an iteratively solved variable distribution in porous electrodes. The multiscale method combines the macroscopic method with pore-scale simulation by decomposing a variable distribution into a macroscopic component and local fluctuations. The decomposition removes the slowly converging, long-wavelength components of an iteratively solved variable distribution, thereby accelerating the convergence. In this research, to reduce the computational cost of multiphase pore-scale simulation, the multiscale method is applied to the electrolyte-phase potential and the oxygen concentration, both of which converge slowly and limit the overall computational efficiency. The results show that the multiscale method can substantially accelerate the convergence without sacrificing accuracy. It is also found that the estimation of the effective transport property appearing in the volume-averaged part of the multiscale method influences its convergence rate: with a more accurate estimate of the effective transport property, the multiscale method works more effectively, especially for a thick porous electrode. Since it is an important parameter in the application to oxygen concentration, the effective oxygen diffusivity in the pores is systematically investigated using pore-scale simulation, and empirical correlations for use in the multiscale method, as well as in other macroscopic simulation methods, are obtained. The emphasis is placed on the importance of Knudsen diffusion in the nanoscale pores of the catalyst layer. The results also highlight the influence of the liquid water distribution on the estimated effective diffusivity and, therefore, on the computational efficiency of the multiscale method. With the reduced computational cost, a multiphase pore-scale simulation of a catalyst layer used in a laboratory experiment is successfully performed. The proposed multiscale decomposition method can be extended to pore-scale simulation of any porous electrode.
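The core decomposition idea can be illustrated in a few lines. This is a generic sketch under our own assumptions, not the dissertation's actual scheme: a field solved iteratively on a fine pore-scale grid is split into a coarse, volume-averaged macroscopic component and zero-mean local fluctuations.

```python
import numpy as np

def decompose(field, cell):
    """Split a fine-grid field phi into a macroscopic (volume-averaged)
    component Phi and local fluctuations phi', with phi = Phi + phi'.
    `cell` is the number of fine-grid points per coarse averaging volume."""
    assert len(field) % cell == 0, "grid must tile evenly into coarse cells"
    macro = field.reshape(-1, cell).mean(axis=1)   # coarse volume averages
    macro_fine = np.repeat(macro, cell)            # broadcast back to the fine grid
    fluct = field - macro_fine                     # zero mean within each coarse cell
    return macro_fine, fluct
```

The long-wavelength part of the solution, which an iterative pore-scale solver relaxes most slowly, is carried by the cheap macroscopic component, so the pore-scale solver only has to converge the short-wavelength fluctuations; this is the sense in which the decomposition accelerates convergence.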
Advisors/Committee Members: Kim, Seung Hyun (Advisor).
Subjects/Keywords: Mechanical Engineering; multiscale method; proton exchange membrane fuel cell; catalyst layer; pore-scale simulation; multiphase flow; computational cost; effective transport property; lattice Boltzmann method; porous media
APA (6th Edition):
ZHENG, W. (2019). Pore-Scale Simulation of Cathode Catalyst Layers in Proton Exchange Membrane Fuel Cells (PEMFCs). (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1555436163992345
◁ [1] [2] ▶