You searched for +publisher:"Colorado State University" +contributor:("Whitley, L. Darrell")
.
Showing records 1 – 9 of
9 total matches.
No search limiters apply to these results.

Colorado State University
1.
Hains, Douglas R.
Generalized partition crossover for the traveling salesman problem.
Degree: MS(M.S.), Computer Science, 2011, Colorado State University
URL: http://hdl.handle.net/10217/47314
The Traveling Salesman Problem (TSP) is a well-studied combinatorial optimization problem with a wide spectrum of applications and theoretical value. We have designed a new recombination operator for the TSP known as Generalized Partition Crossover (GPX). GPX is unique among recombination operators for the TSP in that recombining two local optima produces new local optima with high probability. Thus the operator can 'tunnel' between local optima without the need for intermediary solutions. The operator is respectful, meaning that any edges common to the two parent solutions are present in the offspring, and transmits alleles, meaning that offspring are composed only of edges found in the parent solutions. We design a hybrid genetic algorithm, which uses local search in addition to recombination and selection, specifically for GPX. We show that this algorithm outperforms Chained Lin-Kernighan, a state-of-the-art approximation algorithm for the TSP. We next analyze these algorithms to determine why they are not capable of consistently finding a globally optimal solution. Our results reveal a search space structure we call 'funnels' because they are analogous to the funnels found in continuous optimization. Funnels are clusters of tours in the search space that are separated from one another by a non-trivial distance. We find that funnels can trap Chained Lin-Kernighan, preventing the search from finding an optimal solution. Our data indicate that, under certain conditions, GPX can tunnel between funnels, explaining the higher frequency of optimal solutions produced by our hybrid genetic algorithm using GPX.
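Below is a minimal, illustrative sketch of the partition-crossover idea as the abstract describes it, not the thesis's exact GPX algorithm: shared parent edges are inherited directly (respectfulness), the union graph minus shared edges is split into connected components, and within each component the cheaper parent's edges are kept (transmission of alleles). The `dist` dictionary, assumed keyed by sorted city pairs, and all names are hypothetical.

```python
# Hedged sketch of partition crossover on two TSP tours; illustrative only.
def tour_edges(tour):
    """Undirected edge set of a cyclic tour given as a vertex sequence."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def partition_crossover(parent_a, parent_b, dist):
    ea, eb = tour_edges(parent_a), tour_edges(parent_b)
    shared = ea & eb                      # common edges: always inherited
    graph = {}                            # union graph of non-shared edges
    for e in (ea | eb) - shared:
        u, v = tuple(e)
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)
    components, seen = [], set()          # connected components via DFS
    for start in graph:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(graph[node] - comp)
        seen |= comp
        components.append(comp)
    child = set(shared)                   # keep the cheaper parent per component
    for comp in components:
        def cost(edges):
            return sum(dist[tuple(sorted(e))] for e in edges if set(e) <= comp)
        better = ea if cost(ea - shared) <= cost(eb - shared) else eb
        child |= {e for e in better - shared if set(e) <= comp}
    return child   # edge set; real GPX verifies the result forms a single tour
```

The property the abstract exploits is that the choice inside each component is independent of the others, so one recombination implicitly searches every combination of component choices.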
Advisors/Committee Members: Whitley, L. Darrell (advisor), Howe, Adele E. (committee member), Mueller, Jennifer L. (committee member).
Subjects/Keywords: genetic algorithms; Traveling Salesman Problem; search space; local search
APA (6th Edition):
Hains, D. R. (2011). Generalized partition crossover for the traveling salesman problem. (Masters Thesis). Colorado State University. Retrieved from http://hdl.handle.net/10217/47314
Chicago Manual of Style (16th Edition):
Hains, Douglas R. “Generalized partition crossover for the traveling salesman problem.” 2011. Masters Thesis, Colorado State University. Accessed January 22, 2021.
http://hdl.handle.net/10217/47314.
MLA Handbook (7th Edition):
Hains, Douglas R. “Generalized partition crossover for the traveling salesman problem.” 2011. Web. 22 Jan 2021.
Vancouver:
Hains DR. Generalized partition crossover for the traveling salesman problem. [Internet] [Masters thesis]. Colorado State University; 2011. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10217/47314.
Council of Science Editors:
Hains DR. Generalized partition crossover for the traveling salesman problem. [Masters Thesis]. Colorado State University; 2011. Available from: http://hdl.handle.net/10217/47314

Colorado State University
2.
Chen, Wenxiang.
A step toward constant time local search for optimizing pseudo Boolean functions.
Degree: MS(M.S.), Computer Science, 2013, Colorado State University
URL: http://hdl.handle.net/10217/81007
Pseudo Boolean Functions (PBFs) are the objective functions for a wide class of hard optimization problems, such as MAX-SAT and MAX-CUT. Since these problems are NP-Hard, researchers and practitioners rely on incomplete solvers, such as Stochastic Local Search (SLS), for large problems. Best-Improvement Local Search (BILS) is a common form of SLS, which always takes the move yielding the highest improvement in the objective function. Generally, the more runtime SLS is given, the better the solution that can be obtained. This thesis aims at algorithmically accelerating SLS for PBFs using Walsh analysis. The contributions of this thesis are threefold. First, a general approach for executing an approximate best-improvement move in constant time on average using Walsh analysis, "Walsh-LS", is described. Conventional BILS typically requires examining all n neighbors to decide which move to take, where n is the number of variables. With Walsh analysis, however, we can determine which neighbors need to be checked. As long as the objective function is epistatically bounded by a constant k (the number of variables per subfunction), the number of neighbors that need to be checked is constant regardless of problem size. An impressive runtime speedup (up to 449 times) is observed in our empirical studies. Second, in the context of Walsh-LS, we empirically study two key components of SLS from the perspectives of both efficiency and effectiveness: 1) the local-optimum escape method: hard random restarts or soft restarts; 2) the local search strategy: first-improvement or best-improvement. Lastly, on average we can perform approximate BILS using the mean over a Hamming region of arbitrary radius as a surrogate objective function. Even though the number of points is exponential in the radius of the Hamming region, BILS using the mean value of points in the Hamming region as a surrogate objective function can still take each move in time independent of n on average. According to our empirical studies, using the average over a Hamming region as a surrogate objective function can yield superior performance on neutral landscapes like NKq-landscapes.
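The following sketch illustrates the bookkeeping that makes an approximate constant-time best-improvement move possible for a k-bounded function. It is a simplified stand-in for Walsh-LS (function and variable names are hypothetical) that uses the subfunction structure directly rather than Walsh coefficients: flipping bit j can only change the improvement value of variables that co-occur with j in some subfunction.

```python
# Illustrative best-improvement local search for a k-bounded pseudo-Boolean
# function (maximization). After a flip, only variables sharing a
# subfunction with the flipped bit are re-scored.
def best_improvement_ls(x, subfns):
    """x: list of 0/1 bits; subfns: list of (vars, fn), fn: bit tuple -> float."""
    n = len(x)
    touching = [[] for _ in range(n)]        # subfunction indices per variable
    for idx, (vs, _) in enumerate(subfns):
        for v in vs:
            touching[v].append(idx)

    def delta(j):                            # gain from flipping bit j
        d = 0.0
        for idx in touching[j]:
            vs, fn = subfns[idx]
            d += fn(tuple(x[v] ^ (v == j) for v in vs)) - fn(tuple(x[v] for v in vs))
        return d

    deltas = [delta(j) for j in range(n)]
    while True:
        j = max(range(n), key=deltas.__getitem__)
        if deltas[j] <= 0:
            return x                         # local optimum reached
        x[j] ^= 1
        # Re-score only variables that co-occur with j in some subfunction.
        for v in {u for idx in touching[j] for u in subfns[idx][0]}:
            deltas[v] = delta(v)
```

As written, selecting the best move still scans all n deltas; a constant-time-per-move implementation would also keep improving moves in a bucket or priority structure, which is the kind of machinery the thesis's Walsh-coefficient formulation supports.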
Advisors/Committee Members: Whitley, L. Darrell (advisor), Howe, Adele E. (advisor), Cheney, Margaret (committee member).
Subjects/Keywords: complexity per move; stochastic local search; pseudo Boolean optimization; NK-landscapes
APA (6th Edition):
Chen, W. (2013). A step toward constant time local search for optimizing pseudo Boolean functions. (Masters Thesis). Colorado State University. Retrieved from http://hdl.handle.net/10217/81007
Chicago Manual of Style (16th Edition):
Chen, Wenxiang. “A step toward constant time local search for optimizing pseudo Boolean functions.” 2013. Masters Thesis, Colorado State University. Accessed January 22, 2021. http://hdl.handle.net/10217/81007.
MLA Handbook (7th Edition):
Chen, Wenxiang. “A step toward constant time local search for optimizing pseudo Boolean functions.” 2013. Web. 22 Jan 2021.
Vancouver:
Chen W. A step toward constant time local search for optimizing pseudo Boolean functions. [Internet] [Masters thesis]. Colorado State University; 2013. [cited 2021 Jan 22]. Available from: http://hdl.handle.net/10217/81007.
Council of Science Editors:
Chen W. A step toward constant time local search for optimizing pseudo Boolean functions. [Masters Thesis]. Colorado State University; 2013. Available from: http://hdl.handle.net/10217/81007

Colorado State University
3.
Sutton, Andrew M.
An analysis of combinatorial search spaces for a class of NP-hard problems.
Degree: PhD, Computer Science, 2011, Colorado State University
URL: http://hdl.handle.net/10217/50161
Given a finite but very large set of states X and a real-valued objective function ƒ defined on X, combinatorial optimization refers to the problem of finding elements of X that maximize (or minimize) ƒ. Many combinatorial search algorithms employ some perturbation operator to hill-climb in the search space. Such perturbative local search algorithms are state of the art for many classes of NP-hard combinatorial optimization problems, such as maximum k-satisfiability, scheduling, and problems of graph theory. In this thesis we analyze combinatorial search spaces by expanding the objective function into a (sparse) series of basis functions. While most analyses of the distribution of function values in the search space must rely on empirical sampling, the basis function expansion allows us to directly study the distribution of function values across regions of states for combinatorial problems without the need for sampling. We concentrate on objective functions that can be expressed as bounded pseudo-Boolean functions, which are NP-hard to solve in general. We use the basis expansion to construct a polynomial-time algorithm for exactly computing constant-degree moments of the objective function ƒ over arbitrarily large regions of the search space. On functions with restricted codomains, these moments are related to the true distribution by a system of linear equations. Given low moments supplied by our algorithm, we construct bounds on the true distribution of ƒ over regions of the space using a linear programming approach. A straightforward relaxation allows us to efficiently approximate the distribution and hence quickly estimate the count of states in a given region that have certain values under the objective function. The analysis is also useful for characterizing properties of specific combinatorial problems. For instance, by connecting search space analysis to the theory of inapproximability, we prove that the bound specified by Grover's maximum principle for the Max-Ek-Lin-2 problem is sharp. Moreover, we use the framework to prove certain configurations are forbidden in regions of the Max-3-Sat search space, supplying the first theoretical confirmation of empirical results by others. Finally, we show that theoretical results can be used to drive the design of algorithms in a principled manner by using the search space analysis developed in this thesis in algorithmic applications. First, information obtained from our moment-retrieving algorithm can be used to direct a hill-climbing search across plateaus in the Max-k-Sat search space. Second, the analysis can be used to control the mutation rate of a (1+1) evolutionary algorithm on bounded pseudo-Boolean functions so that the offspring of each search point is maximized in expectation. For these applications, knowledge of the search space structure supplied by the analysis translates to significant gains in the performance of search.
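As one concrete instance of the moments the abstract refers to, assume the standard Walsh expansion f(x) = Σ_b w_b ψ_b(x) with ψ_b(x) = (−1)^{b·x}. The identity below gives the first moment over the radius-1 Hamming sphere around x; it is a standard result consistent with the abstract, not the thesis's full moment algorithm.

```latex
% Flipping bit i negates \psi_b(x) iff i lies in the support of b,
% so averaging f over the n single-bit flips of x gives
\frac{1}{n}\sum_{i=1}^{n} f(x \oplus e_i)
  \;=\; \sum_{b} w_b\!\left(1 - \frac{2\,|b|}{n}\right)\!\psi_b(x).
```

For a k-bounded pseudo-Boolean function only polynomially many coefficients w_b are nonzero and every support size satisfies |b| ≤ k, so such moments can be computed exactly in polynomial time without sampling the neighborhood.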
Advisors/Committee Members: Whitley, L. Darrell (advisor), Howe, Adele E. (advisor), Chong, Edwin K. P. (committee member), Bohm, A. P. Willem (committee member).
Subjects/Keywords: combinatorial optimization; combinatorial search; local search; pseudo-Boolean functions; search space analysis
APA (6th Edition):
Sutton, A. M. (2011). An analysis of combinatorial search spaces for a class of NP-hard problems. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/50161
Chicago Manual of Style (16th Edition):
Sutton, Andrew M. “An analysis of combinatorial search spaces for a class of NP-hard problems.” 2011. Doctoral Dissertation, Colorado State University. Accessed January 22, 2021. http://hdl.handle.net/10217/50161.
MLA Handbook (7th Edition):
Sutton, Andrew M. “An analysis of combinatorial search spaces for a class of NP-hard problems.” 2011. Web. 22 Jan 2021.
Vancouver:
Sutton AM. An analysis of combinatorial search spaces for a class of NP-hard problems. [Internet] [Doctoral dissertation]. Colorado State University; 2011. [cited 2021 Jan 22]. Available from: http://hdl.handle.net/10217/50161.
Council of Science Editors:
Sutton AM. An analysis of combinatorial search spaces for a class of NP-hard problems. [Doctoral Dissertation]. Colorado State University; 2011. Available from: http://hdl.handle.net/10217/50161

Colorado State University
4.
Chen, Wenxiang.
Discovering and harnessing structures in solving application satisfiability instances.
Degree: PhD, Computer Science, 2018, Colorado State University
URL: http://hdl.handle.net/10217/189292
Boolean satisfiability (SAT) is the first problem proven to be NP-Complete. It has become a fundamental problem for computational complexity theory, and many real-world problems can be encoded as SAT instances. Two major search paradigms have been proposed for SAT solving: Systematic Search (SS) and Stochastic Local Search (SLS). SLS solvers have been shown to be very effective on uniform random instances; SLS solvers are consistently the top winning entries for random tracks at SAT competitions. However, SS solvers dominate hard combinatorial tracks and industrial tracks at SAT competitions, with SLS entries at the very bottom of the ranking. In this work, we classify both hard combinatorial instances and industrial instances as Application Instances. As application instances are more interesting from a practical perspective, it is critical to analyze the structures in application instances as well as to improve SLS on application instances. We focus on two structural properties of SAT instances in this work: variable interaction topology and subproblem constrainedness. Decomposability concerns how well the variable interactions of an application instance can be decomposed. We first show that many application instances are indeed highly decomposable. The decomposability of a SAT instance has been extensively exploited with success by SS solvers. Meanwhile, SLS solvers direct the variables to flip using only the objective function, and are completely oblivious to the decomposability of application instances that is inherent to the original problem domain. We propose a new method to decompose variable interactions within SLS solvers, leveraging the numerous visited local optima. Our empirical study suggests that the proposed method can vastly simplify SAT instances, which further results in decomposing the instances into thousands of connected components. Furthermore, we demonstrate the utility of the decomposition in improving SLS solvers. We propose a new framework called PXSAT, based on the recombination operator Partition Crossover (PX). Given q components, PX is able to find the best of 2^q possible candidate solutions in linear time. Empirical results on an extensive set of application instances show that PXSAT can yield statistically significantly better results. We improve two of the best local search solvers, AdaptG2WSAT and Sparrow. PXSAT combined with AdaptG2WSAT is also able to outperform CCLS, winner of several recent MAXSAT competitions. The other structural property we study is subproblem constrainedness. We observe that, on some application SAT instance classes, the original problem can be partitioned into several subproblems, where each subproblem is highly constrained. While subproblem constrainedness has been exploited in SS solvers before, we propose to exploit it in SLS solvers using two alternative representations that can be obtained efficiently from the canonical CNF representation. Our empirical results show that the new alternative representations enable a simple SLS solver to outperform…
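A hedged sketch of the decomposition step behind the "best of 2^q in linear time" claim, assuming DIMACS-style clauses (lists of signed integers) and 0/1 assignments; this illustrates the PX idea, not the PXSAT implementation.

```python
# Illustrative Partition Crossover on SAT/MAXSAT assignments.
def clause_sat(clause, assign):
    """True if assignment (list of 0/1) satisfies the clause (list of ±(var+1))."""
    return any((assign[abs(l) - 1] == 1) == (l > 0) for l in clause)

def px_sat(pa, pb, clauses):
    n = len(pa)
    diff = {v for v in range(n) if pa[v] != pb[v]}
    uf = {v: v for v in diff}            # union-find over differing variables
    def find(v):
        while uf[v] != v:
            uf[v] = uf[uf[v]]
            v = uf[v]
        return v
    for c in clauses:                    # merge differing vars sharing a clause
        vs = [abs(l) - 1 for l in c if abs(l) - 1 in diff]
        for u in vs[1:]:
            uf[find(u)] = find(vs[0])
    comps = {}
    for v in diff:
        comps.setdefault(find(v), set()).add(v)
    # Independently pick the better parent inside each of the q components:
    # one pass per component evaluates the best of 2**q offspring.
    child = list(pa)
    for comp in comps.values():
        touched = [c for c in clauses if any(abs(l) - 1 in comp for l in c)]
        cand = [pb[v] if v in comp else pa[v] for v in range(n)]
        if sum(clause_sat(c, cand) for c in touched) > \
           sum(clause_sat(c, pa) for c in touched):
            for v in comp:
                child[v] = pb[v]
    return child
```

Because differing variables that share a clause are merged into one component, no clause spans two components, so each component's choice is independent. As written the scoring pass rescans all clauses per component; a linear-time implementation indexes clauses by variable.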
Advisors/Committee Members: Whitley, L. Darrell (advisor), Draper, Bruce A. (committee member), Böhm, A. P. Wim (committee member), Chong, Edwin K. P. (committee member).
Subjects/Keywords: MAXSAT; SAT; optimization; local search
APA (6th Edition):
Chen, W. (2018). Discovering and harnessing structures in solving application satisfiability instances. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/189292
Chicago Manual of Style (16th Edition):
Chen, Wenxiang. “Discovering and harnessing structures in solving application satisfiability instances.” 2018. Doctoral Dissertation, Colorado State University. Accessed January 22, 2021.
http://hdl.handle.net/10217/189292.
MLA Handbook (7th Edition):
Chen, Wenxiang. “Discovering and harnessing structures in solving application satisfiability instances.” 2018. Web. 22 Jan 2021.
Vancouver:
Chen W. Discovering and harnessing structures in solving application satisfiability instances. [Internet] [Doctoral dissertation]. Colorado State University; 2018. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10217/189292.
Council of Science Editors:
Chen W. Discovering and harnessing structures in solving application satisfiability instances. [Doctoral Dissertation]. Colorado State University; 2018. Available from: http://hdl.handle.net/10217/189292

Colorado State University
5.
Whyman, Paul Arthur.
Automatic endpoint vulnerability detection of Linux and open source using the National Vulnerability Database.
Degree: MS(M.S.), Computer Science, 2008, Colorado State University
URL: http://hdl.handle.net/10217/80811
A means to reduce security risks to a network of computers is to manage which computers can participate on a network, and to control the participation of systems that do not conform to the security policy. Requiring systems to demonstrate their compliance with the policy can limit the risk of allowing noncompliant systems access to trusted networks. One aspect of determining the risk a system represents is patch level, a comparison between the availability of vendor security patches and their application on a system. A fully updated system has all available patches applied. Using patch level as a security policy metric, systems can be evaluated as compliant yet still contain known vulnerabilities, representing real risks of exploitation. An alternative approach is a direct comparison of system software to public vulnerability reports contained in the National Vulnerability Database (NVD). This approach may produce a more accurate assessment of system risk for several reasons, including removing the delay caused by vendor patch development and analyzing system risk using vendor-independent vulnerability information. This work demonstrates empirically that current, fully patched systems contain numerous software vulnerabilities. The technique can also apply to platforms other than those of Open Source origin. This alternative method, which compares system software components to lists of known software vulnerabilities, must reliably match system components to those listed as vulnerable. This match requires a precise identification of both the vulnerability and the software that the vulnerability affects. In the process of this analysis, significant issues arose within the NVD pertaining to the presentation of Open Source vulnerability information. Direct matching is not possible using the current information in the NVD. Furthermore, these issues support the belief that the NVD is not an accurate data source for popular statistical comparisons between closed and open source software.
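A minimal sketch of the matching step the abstract describes, comparing an installed-package inventory against records from a local copy of the NVD feed. The record fields and version handling here are deliberately simplified assumptions; real NVD entries identify affected software via CPE names, and the abstract's conclusion is precisely that such direct matching is unreliable for open-source software.

```python
# Hypothetical, simplified vulnerability matching against local NVD records.
from dataclasses import dataclass

@dataclass(frozen=True)
class VulnRecord:
    cve_id: str
    product: str          # normalized product name (assumed field)
    max_affected: str     # highest affected version, simplified

def version_tuple(v):
    """Crude dotted-version comparison; real matching needs CPE semantics."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def match_vulns(installed, vuln_db):
    """installed: {product: version}; returns CVEs plausibly affecting the host."""
    hits = []
    for rec in vuln_db:
        ver = installed.get(rec.product)
        if ver and version_tuple(ver) <= version_tuple(rec.max_affected):
            hits.append((rec.cve_id, rec.product, ver))
    return hits

# Example: a "fully patched" host can still match known CVEs.
db = [VulnRecord("CVE-2008-0001", "linux_kernel", "2.6.24")]
print(match_vulns({"linux_kernel": "2.6.22"}, db))
```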
Advisors/Committee Members: Ray, Indrajit (advisor), Krawetz, Neal (committee member), Whitley, L. Darrell (committee member), Hayne, Stephen (committee member).
Subjects/Keywords: Open source software; Computer security
APA (6th Edition):
Whyman, P. A. (2008). Automatic endpoint vulnerability detection of Linux and open source using the National Vulnerability Database. (Masters Thesis). Colorado State University. Retrieved from http://hdl.handle.net/10217/80811
Chicago Manual of Style (16th Edition):
Whyman, Paul Arthur. “Automatic endpoint vulnerability detection of Linux and open source using the National Vulnerability Database.” 2008. Masters Thesis, Colorado State University. Accessed January 22, 2021.
http://hdl.handle.net/10217/80811.
MLA Handbook (7th Edition):
Whyman, Paul Arthur. “Automatic endpoint vulnerability detection of Linux and open source using the National Vulnerability Database.” 2008. Web. 22 Jan 2021.
Vancouver:
Whyman PA. Automatic endpoint vulnerability detection of Linux and open source using the National Vulnerability Database. [Internet] [Masters thesis]. Colorado State University; 2008. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10217/80811.
Council of Science Editors:
Whyman PA. Automatic endpoint vulnerability detection of Linux and open source using the National Vulnerability Database. [Masters Thesis]. Colorado State University; 2008. Available from: http://hdl.handle.net/10217/80811

Colorado State University
6.
Dewri, Rinku.
Multi-criteria analysis in modern information management.
Degree: PhD, Computer Science, 2010, Colorado State University
URL: http://hdl.handle.net/10217/40284
The past few years have witnessed an overwhelming amount of research in the field of information security and privacy. An encouraging outcome of this research is the vast accumulation of theoretical models that help capture the various threats that persistently hinder the best possible usage of today's powerful communication infrastructure. While theoretical models are essential to understanding the impact of any breakdown in the infrastructure, they are of limited application if the underlying business-centric view is ignored. Information management in this context is the strategic management of the infrastructure, incorporating knowledge about causes and consequences to arrive at the right balance between risk and profit. Modern information management systems are home to a vast repository of sensitive personal information. While these systems depend on quality data to boost the Quality of Service (QoS), they also run the risk of violating privacy regulations. The presence of network vulnerabilities also weakens these systems, since security policies cannot always be enforced to prevent all forms of exploitation. This problem is more strongly grounded in the insufficient availability of resources than in the inability to predict zero-day attacks. System resources also impact the availability of access to information, which is itself becoming more and more ubiquitous. Information access times in such ubiquitous environments must be maintained within a specified QoS level. In short, modern information management must consider the mutual interactions between risks, resources and services to achieve wide-scale acceptance. This dissertation explores these problems in the context of three important domains, namely disclosure control, security risk management and wireless data broadcasting. Research in these domains has been put together under the umbrella of multi-criteria decision making to signify that "business survival" is an equally important factor to consider while analyzing risks and providing solutions for their resolution. We emphasize that businesses are always bound by constraints in their effort to mitigate risks and therefore benefit the most from a framework that allows the exploration of solutions that abide by the constraints. Towards this end, we revisit the optimization problems being solved in these domains and argue that they overlook the underlying cost-benefit relationship. Our approach in this work is motivated by the inherent multi-objective nature of the problems. We propose formulations that help expose the cost-benefit relationship across the different objectives that must be met in these problems. Such an analysis provides a decision maker with the necessary information to make an informed decision on the impact of choosing a control measure over the business goals of an organization. The theories and tools necessary to perform this analysis are introduced to the community.
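A small sketch of the multi-objective viewpoint the abstract advocates: rather than collapsing risk and cost into a single score, keep the set of non-dominated trade-offs so the decision maker sees the cost-benefit relationship directly. The objective tuples are hypothetical and assumed to be minimized.

```python
# Pareto filter over candidate solutions scored on several objectives.
def pareto_front(solutions):
    """solutions: list of tuples of objective values, all minimized."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# e.g. (residual risk, mitigation cost) for candidate security-control sets
print(pareto_front([(0.9, 1), (0.5, 4), (0.5, 6), (0.2, 9), (0.8, 5)]))
# -> keeps (0.9, 1), (0.5, 4), (0.2, 9): the cost-benefit frontier
```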
Advisors/Committee Members: Whitley, L. Darrell (advisor), Ray, Indrajit, 1966- (advisor), Ray, Indrakshi (committee member), Siegel, Howard Jay (committee member).
Subjects/Keywords: Information management; data broadcasting; multi-objective optimization; information security and privacy; Computer networks – Security measures; Computer security; Multiple criteria decision making
APA (6th Edition):
Dewri, R. (2010). Multi-criteria analysis in modern information management. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/40284
Chicago Manual of Style (16th Edition):
Dewri, Rinku. “Multi-criteria analysis in modern information management.” 2010. Doctoral Dissertation, Colorado State University. Accessed January 22, 2021.
http://hdl.handle.net/10217/40284.
MLA Handbook (7th Edition):
Dewri, Rinku. “Multi-criteria analysis in modern information management.” 2010. Web. 22 Jan 2021.
Vancouver:
Dewri R. Multi-criteria analysis in modern information management. [Internet] [Doctoral dissertation]. Colorado State University; 2010. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10217/40284.
Council of Science Editors:
Dewri R. Multi-criteria analysis in modern information management. [Doctoral Dissertation]. Colorado State University; 2010. Available from: http://hdl.handle.net/10217/40284

Colorado State University
7.
Watson, Jean-Paul.
Empirical modeling and analysis of local search algorithms for the job-shop scheduling problem.
Degree: PhD, Computer Science, 2003, Colorado State University
URL: http://hdl.handle.net/10217/26811
Local search algorithms are among the most effective approaches for locating high-quality solutions to a wide range of combinatorial optimization problems. However, our theoretical understanding of these algorithms is very limited, leading to significant problems for both researchers and practitioners. Specifically, the lack of a theory of local search impedes the development of more effective algorithms, prevents practitioners from identifying the algorithm most appropriate for a given problem, and allows widespread conjecture and misinformation regarding the benefits and/or drawbacks of particular algorithms. This thesis represents a significant step toward a theory of local search. Using empirical methods, we develop theoretical models of the behavior of four well-known local search algorithms: a random walk, tabu search, iterated local search, and simulated annealing. The analysis proceeds in the context of the well-known job-shop scheduling problem, one of the most difficult NP-hard problems encountered in practice. The large volume of prior research on the job-shop scheduling problem provides a diverse range of available algorithms and problem instances, in addition to numerous empirical observations regarding local search algorithm behavior; the latter are used to validate our behavioral models. We show that all four local search algorithms can be modeled with high fidelity using straightforward variations of a generalized one-dimensional Markov chain. The states in these models represent sets of solutions a given fixed distance from the nearest optimal solution. The transition probabilities in all of the models are remarkably similar, in that search is consistently biased toward solutions that are roughly equidistant from the nearest optimal solution and solutions that are maximally distant from the nearest optimal solution. Surprisingly, the qualitative form of the transition probabilities is simply due to the structure of the representation used to encode solutions: the binary hypercube. The models account for between 96% and 99% of the variability in the cost required to locate both optimal and sub-optimal solutions to a wide range of problem instances, and provide explanations for numerous phenomena related to problem difficulty for local search in the job-shop scheduling problem. In the course of our analysis, we also disprove many conjectures regarding the behavior and benefits of particular algorithms. Our research indicates that despite their effectiveness, local search algorithms for the job-shop scheduling problem exhibit surprisingly simple run-time dynamics. Further, we observe minimal differences between the dynamical behavior of different algorithms. As expected given similar run-time dynamics, although contrary to numerous reports appearing in the literature, we also show that the performance of different algorithms is largely indistinguishable. Ultimately, our behavioral models serve to unify and provide explanations for a large body of observations regarding problem difficulty for local…
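To make the modeling idea concrete: if states are distances to the nearest optimal solution and search moves as a one-dimensional (birth-death) Markov chain over them, the expected cost to first reach an optimum solves a small linear system. The transition probabilities below are illustrative placeholders, not the thesis's fitted values.

```python
# Expected hitting time of state 0 in a one-dimensional Markov chain model.
import numpy as np

def hitting_times(p_down, p_up):
    """States 1..m; p_down[i], p_up[i] are the probabilities of moving to
    state i and state i+2 from state i+1 (array index i), the remainder
    being a self-loop. Assumes p_up[-1] == 0 (reflecting top state).
    Returns E[steps to reach state 0] from each state."""
    m = len(p_down)
    A = np.zeros((m, m))
    for i in range(m):
        A[i, i] = p_down[i] + p_up[i]        # probability of leaving the state
        if i > 0:
            A[i, i - 1] = -p_down[i]
        if i < m - 1:
            A[i, i + 1] = -p_up[i]
    return np.linalg.solve(A, np.ones(m))

# Toy chain biased away from the optimum at larger distances, echoing the
# abstract's observation that search drifts toward mid/far states.
p_down = [0.30, 0.20, 0.10, 0.05]
p_up   = [0.40, 0.45, 0.50, 0.00]
print(hitting_times(p_down, p_up))           # expected cost vs. start distance
```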
Advisors/Committee Members: Howe, Adele E. (advisor), Whitley, L. Darrell (advisor), Chong, Edwin Kah Pin (committee member), Bohm, Anton Willem (committee member).
Subjects/Keywords: random walk; tabu search; job-shop scheduling problem; empirical methods; local search algorithms; iterated local search; simulated annealing; Production scheduling – Computer simulation
APA (6th Edition):
Watson, J. (2003). Empirical modeling and analysis of local search algorithms for the job-shop scheduling problem. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/26811
Chicago Manual of Style (16th Edition):
Watson, Jean-Paul. “Empirical modeling and analysis of local search algorithms for the job-shop scheduling problem.” 2003. Doctoral Dissertation, Colorado State University. Accessed January 22, 2021.
http://hdl.handle.net/10217/26811.
MLA Handbook (7th Edition):
Watson, Jean-Paul. “Empirical modeling and analysis of local search algorithms for the job-shop scheduling problem.” 2003. Web. 22 Jan 2021.
Vancouver:
Watson J. Empirical modeling and analysis of local search algorithms for the job-shop scheduling problem. [Internet] [Doctoral dissertation]. Colorado State University; 2003. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10217/26811.
Council of Science Editors:
Watson J. Empirical modeling and analysis of local search algorithms for the job-shop scheduling problem. [Doctoral Dissertation]. Colorado State University; 2003. Available from: http://hdl.handle.net/10217/26811

Colorado State University
8.
Kretchmar, R. Matthew.
A synthesis of reinforcement learning and robust control theory.
Degree: PhD, Computer Science, 2000, Colorado State University
URL: http://hdl.handle.net/10217/26305
The pursuit of control algorithms with improved performance drives the entire control research community as well as large parts of the mathematics, engineering, and artificial intelligence research communities. A fundamental limitation on achieving control performance is the conflicting requirement of maintaining system stability. In general, the more aggressive the controller, the better the control performance, but also the closer the system is to instability. Robust control is a collection of theories, techniques, and tools that form one of the leading-edge approaches to control. Most controllers are designed not on the physical plant to be controlled, but on a mathematical model of the plant; hence, these controllers often do not perform well on the physical plant and are sometimes unstable. Robust control overcomes this problem by adding uncertainty to the mathematical model. The result is a more general, less aggressive controller which performs well on both the model and the physical plant. However, the robust control method also sacrifices some control performance in order to achieve its guarantees of stability. Reinforcement-learning-based neural networks offer some distinct advantages for improving control performance. Their nonlinearity enables the neural network to implement a wider range of control functions, and their adaptability permits them to improve control performance via on-line, trial-and-error learning. However, neuro-control is typically plagued by a lack of stability guarantees. Even momentary instability cannot be tolerated in most physical plants, and thus the threat of instability prohibits the application of neuro-control in many situations. In this dissertation, we develop a stable neuro-control scheme by synthesizing the two fields of reinforcement learning and robust control theory. We provide a learning system with many of the advantages of neuro-control. Using functional uncertainty to represent the nonlinear and time-varying components of the neural networks, we apply robust control techniques to guarantee the stability of our neuro-controller. Our scheme provides stable control not only for a specific fixed-weight neural network, but also for a neuro-controller in which the weights change during learning. Furthermore, we apply our stable neuro-controller to several control tasks to demonstrate that the theoretical stability guarantee is readily applicable to real-life control situations. We also discuss several problems we encountered and identify potential avenues of future research.
Advisors/Committee Members: Anderson, Charles (advisor), Howe, Adele E. (committee member), Whitley, L. Darrell (committee member), Young, Peter M. (committee member), Hittle, Douglas C. (committee member).
Subjects/Keywords: robust control theory; neuro-control scheme; neuro-controller; neural networks; Reinforcement learning; Control theory
APA (6th Edition):
Kretchmar, R. M. (2000). A synthesis of reinforcement learning and robust control theory. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/26305
Chicago Manual of Style (16th Edition):
Kretchmar, R. Matthew. “A synthesis of reinforcement learning and robust control theory.” 2000. Doctoral Dissertation, Colorado State University. Accessed January 22, 2021. http://hdl.handle.net/10217/26305.
MLA Handbook (7th Edition):
Kretchmar, R. Matthew. “A synthesis of reinforcement learning and robust control theory.” 2000. Web. 22 Jan 2021.
Vancouver:
Kretchmar RM. A synthesis of reinforcement learning and robust control theory. [Internet] [Doctoral dissertation]. Colorado State University; 2000. [cited 2021 Jan 22]. Available from: http://hdl.handle.net/10217/26305.
Council of Science Editors:
Kretchmar RM. A synthesis of reinforcement learning and robust control theory. [Doctoral Dissertation]. Colorado State University; 2000. Available from: http://hdl.handle.net/10217/26305

Colorado State University
9.
Lui, Yui Man.
Geometric methods on special manifolds for visual recognition.
Degree: PhD, Computer Science, 2010, Colorado State University
URL: http://hdl.handle.net/10217/39042
Many computer vision methods assume that the underlying geometry of images is Euclidean. This assumption is generally not valid. Therefore, this dissertation introduces new nonlinear geometric frameworks based upon special manifolds, namely Grassmann and Stiefel manifolds, for visual recognition. The motivation for this thesis is the intrinsic geometry of visual data, where the data may be either still images or video. Visual data are represented as points in appropriately chosen parameter spaces. The idiosyncratic aspects of the data in these spaces are then exploited for pattern classification. Three major research results are presented in this dissertation: face recognition for illumination spaces on Stiefel manifolds, face recognition on Grassmann registration manifolds, and action classification on product manifolds. Previous work has shown that illumination cones are idiosyncratic for face recognition in illumination spaces. However, how a single image relates to an illumination cone has not been addressed. In this dissertation, a Bayesian model is employed to relight a single image to a set of illuminated variants. The subspace formed by these illuminated variants is characterized on a Stiefel manifold. A new distance measure called the Canonical Stiefel Quotient (CSQ) is introduced. CSQ performs two projections on a tangent space of a Stiefel manifold and uses the quotient for classification. The proposed method demonstrates that illumination cones can be synthesized by relighting a single image to a set of images, and that the synthesized illumination cones are discriminative for face recognition. Experiments on the CMU-PIE and YaleB data sets reveal that CSQ not only achieves high recognition accuracies for generic faces but is also robust to the choice of training sets. Subspaces can be realized as points on Grassmann manifolds. Motivated by image perturbation and the geometry of Grassmann manifolds, we present a method called Grassmann Registration Manifolds (GRM) for face recognition. First, a tangent space is formed by a set of affine-perturbed images, where the tangent space admits a vector space structure. Second, the tangent spaces are embedded on a Grassmann manifold and chordal distance is used to compare subspaces. Experiments on the FERET database suggest that the proposed method yields excellent results using both holistic and local features. Specifically, on the FERET Dup2 data set, which is generally considered the most difficult data set in FERET, the proposed method achieves the highest rank-one identification rate among all non-trained methods currently in the literature. Human actions compose a series of movements and can be described by a sequence of video frames. Since videos are multidimensional data, data tensors are the natural choice for data representation. In this dissertation, a data tensor is expressed as a point on a product manifold and classification is performed on this product space. First, we factorize a data tensor using a modified High Order Singular Value…
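A short sketch of the subspace-comparison step common to these methods (illustrative; not the thesis's CSQ or GRM pipelines): two image sets are reduced to orthonormal bases, i.e. points on a Grassmann manifold, and compared by the chordal distance computed from principal angles.

```python
# Chordal distance between two subspaces via principal angles.
import numpy as np

def chordal_distance(A, B):
    """A, B: (n x k) data matrices whose column spans are compared."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    cos_theta = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return np.linalg.norm(np.sin(np.arccos(cos_theta)))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))              # e.g. 5 relit variants of a face
Y = X + 0.1 * rng.standard_normal((100, 5))    # a nearby subspace
Z = rng.standard_normal((100, 5))              # an unrelated subspace
print(chordal_distance(X, Y), chordal_distance(X, Z))   # small vs. large
```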
Advisors/Committee Members: Beveridge, J. Ross (advisor), Kirby, Michael, 1961- (committee member), Draper, Bruce A. (Bruce Austin), 1962- (committee member), Whitley, L. Darrell (committee member).
Subjects/Keywords: action classification; visual recognition; special manifolds; geometric methods; face recognition; Human face recognition (Computer science); Grassmann manifolds; Stiefel manifolds
APA (6th Edition):
Lui, Y. M. (2010). Geometric methods on special manifolds for visual recognition. (Doctoral Dissertation). Colorado State University. Retrieved from http://hdl.handle.net/10217/39042
Chicago Manual of Style (16th Edition):
Lui, Yui Man. “Geometric methods on special manifolds for visual recognition.” 2010. Doctoral Dissertation, Colorado State University. Accessed January 22, 2021.
http://hdl.handle.net/10217/39042.
MLA Handbook (7th Edition):
Lui, Yui Man. “Geometric methods on special manifolds for visual recognition.” 2010. Web. 22 Jan 2021.
Vancouver:
Lui YM. Geometric methods on special manifolds for visual recognition. [Internet] [Doctoral dissertation]. Colorado State University; 2010. [cited 2021 Jan 22].
Available from: http://hdl.handle.net/10217/39042.
Council of Science Editors:
Lui YM. Geometric methods on special manifolds for visual recognition. [Doctoral Dissertation]. Colorado State University; 2010. Available from: http://hdl.handle.net/10217/39042