Theses and Dissertations at Montana State University (MSU)

Permanent URI for this collection: https://scholarworks.montana.edu/handle/1/733

Search Results

Now showing 1 - 10 of 14
  • Factored evolutionary algorithms: cooperative coevolutionary optimization with overlap
    (Montana State University - Bozeman, College of Engineering, 2017) Strasser, Shane Tyler; Chairperson, Graduate Committee: John Sheppard
    Factored Evolutionary Algorithms (FEA) define a relatively new class of evolutionary optimization algorithms that have been successfully applied to problems such as training neural networks and performing abductive inference in graphical models. FEA is unique in that it factors the function being optimized by creating subpopulations that each optimize over a subset of the function's dimensions. Unlike other techniques that subdivide optimization problems, however, FEA encourages subpopulations to overlap with one another, allowing them to compete and share information. Although FEA has been shown to be very effective at function optimization, its general characteristics are still not well understood. In this dissertation, we present seven results exploring the theoretical and empirical properties of FEA. First, we present a formal definition of FEA and demonstrate its relationship to other multiple-population algorithms. Second, we demonstrate that FEA's success is independent of the underlying optimization algorithm by evaluating its performance with a wide variety of evolutionary and swarm-based algorithms against single-population and non-overlapping versions. Third, we demonstrate that, for a given problem, there is an optimal way to generate groups of overlapping subpopulations, derived using the Markov blanket in Bayesian networks. Fourth, we establish that a class of optimization functions, such as NK landscapes, can be mapped directly to probabilistic graphical models; additionally, we demonstrate that factor architectures derived from Markov blankets maintain better diversity of individuals in their populations. Fifth, we present a new discrete Particle Swarm Optimization (PSO) algorithm and compare its performance to competing approaches. We also analyze FEA versions of discrete PSO and discover that FEA can mask the poor performance of the underlying search algorithm. We then show what conditions are necessary for FEA to converge and identify scenarios in which FEA may become stuck in suboptimal regions of the search space. Finally, we explore the performance of FEA on unitation functions and discover several instances where FEA struggles to outperform single-population algorithms. These results allow us to determine which situations are appropriate for FEA when solving real-world problems.
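The factoring idea summarized in this abstract can be illustrated with a small, hedged sketch: overlapping groups of variables are each optimized against a shared global solution, and a competition step folds improvements back into that solution. The code below is an illustrative Python toy (the function names, the hill-climbing sub-optimizer, and the pairwise overlap pattern are invented for this example), not the dissertation's FEA implementation.

```python
# Illustrative sketch of factored optimization with overlapping variable
# groups and a shared global solution. Names are hypothetical.
import random

def sphere(x):
    """Toy objective: minimize the sum of squares."""
    return sum(v * v for v in x)

def optimize_factor(objective, context, factor, iters=50, step=0.5):
    """Hill-climb only the variables in `factor`, holding the rest of
    `context` fixed (a stand-in for a full subpopulation optimizer)."""
    best = list(context)
    best_fit = objective(best)
    for _ in range(iters):
        cand = list(best)
        for i in factor:
            cand[i] += random.uniform(-step, step)
        fit = objective(cand)
        if fit < best_fit:
            best, best_fit = cand, fit
    return best

def factored_optimize(objective, dim, factors, rounds=20):
    global_sol = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(rounds):
        # Solve step: each factor improves its own variables in the shared context.
        proposals = [optimize_factor(objective, global_sol, f) for f in factors]
        # Compete step: accept a factor's values only if they improve the
        # global solution, so factors share information through it.
        for f, prop in zip(factors, proposals):
            trial = list(global_sol)
            for i in f:
                trial[i] = prop[i]
            if objective(trial) < objective(global_sol):
                global_sol = trial
    return global_sol, objective(global_sol)

if __name__ == "__main__":
    # Overlapping factors: consecutive pairs of variables share one index.
    dim = 6
    factors = [(i, (i + 1) % dim) for i in range(dim)]
    sol, fit = factored_optimize(sphere, dim, factors)
    print(f"best fitness: {fit:.4f}")
```

Here the per-factor optimizer is a plain hill climber purely to keep the sketch short; in FEA proper each factor would run a full evolutionary or swarm subpopulation, as the abstract describes.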
  • Using semi-supervised learning for predicting metamorphic relations
    (Montana State University - Bozeman, College of Engineering, 2018) Hardin, Bonnie Elizabeth; Chairperson, Graduate Committee: Upulee Kanewala
    Software testing is difficult to automate, especially for programs that face the oracle problem, where a test oracle does not exist or is too hard to develop. Metamorphic testing addresses this problem by using metamorphic relations to determine whether tests pass or fail. A domain expert, however, must spend a large amount of time determining which metamorphic relations can be used to test a given program; metamorphic relation prediction removes the need for such an expert. We propose a method that uses semi-supervised learning algorithms to detect which metamorphic relations are applicable to a given code base. Semi-supervised learning is useful in this domain because most programs do not have predefined metamorphic relations; such programs serve as unlabeled data for a semi-supervised algorithm. We compare two semi-supervised models with a supervised model and show that adding unlabeled data improves the classification accuracy of the metamorphic relation prediction model.
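As a rough illustration of the semi-supervised setup described above, the sketch below treats programs with expert-labeled metamorphic relations as labeled examples and all remaining programs as unlabeled (label -1), then fits a graph-based semi-supervised model with scikit-learn. The feature matrix is synthetic, and the choice of LabelSpreading is an assumption for the sketch, not the model used in the thesis; in practice the features would be derived from the programs themselves.

```python
# Hedged sketch: semi-supervised prediction of whether a function satisfies a
# given metamorphic relation. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import LabelSpreading

# Synthetic stand-in for per-program feature vectors and a binary label.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pretend only ~20% of the training programs have expert-provided labels.
rng = np.random.RandomState(0)
y_semi = y_train.copy()
unlabeled = rng.rand(len(y_semi)) > 0.2
y_semi[unlabeled] = -1  # -1 marks unlabeled samples for scikit-learn

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X_train, y_semi)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```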
  • Finding disjoint dense clubs in an undirected graph
    (Montana State University - Bozeman, College of Engineering, 2016) Zou, Peng; Chairperson, Graduate Committee: Binhai Zhu
    For over a decade, platforms such as Twitter, Facebook, and WeChat have changed people's lives by making it easy to create social groups and networks. They give people a convenient new 'world' in which to share everything that happens around them, and social networks have grown enormously in recent years. Social networks are full of data and have become an indispensable part of our lives. Trust is an important feature of the relationship between two users in a social network, and as these networks have developed, trust among their members has become a significant issue; in practice, trust usually cannot be carried across many users. In the corresponding graph model, a user is denoted by a vertex, and an edge between two vertices means that the two users communicate above some threshold and trust each other. An online social community then corresponds to a dense region in such a graph. A complex social network is usually composed of several groups or communities (regions with many edges); this community structure appears as densely connected groups of vertices with only sparse connections between groups. To analyze the structure of social networks and the relationships between users, it is important to find disjoint groups/communities with a small diameter and a decent size, formally called dense clubs in this thesis. We focus on handling this NP-complete problem. First, from the parameterized computational complexity point of view, we show that the problem does not admit a polynomial kernel (implying that reduction rules are unlikely to shrink it to a practically small size). Then, we focus on the dual version of the problem, i.e., deleting d vertices to obtain some disjoint dense clubs. We show that this dual problem admits a simple FPT algorithm based on a bounded search tree, although its running time is still too high for practical datasets. Finally, we combine a simple reduction rule with heuristic methods to obtain a practical solution, verified by extensive testing on practical datasets. Empirical results show that this heuristic algorithm is very sensitive to all of its parameters and is best suited to graphs with a mixture of dense and sparse regions, which are very common in the real world.
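For flavor, the following sketch shows one simple seed-and-peel heuristic for extracting disjoint, small-diameter vertex sets ("clubs") of decent size with networkx. It is only an illustration of the problem statement, not the reduction rule or heuristic developed in the thesis; the parameters k (diameter bound) and min_size are hypothetical.

```python
# Hedged sketch: greedily extract disjoint connected subgraphs whose induced
# diameter is at most k ("clubs"), deleting each club before continuing.
import networkx as nx

def find_disjoint_clubs(G, k=2, min_size=4):
    """Greedily find disjoint connected vertex sets of diameter <= k."""
    G = G.copy()
    clubs = []
    while G.number_of_nodes() >= min_size:
        # Seed from the highest-degree vertex and take its radius-k ball.
        seed = max(G.degree, key=lambda kv: kv[1])[0]
        ball = set(nx.single_source_shortest_path_length(G, seed, cutoff=k))
        sub = G.subgraph(ball).copy()
        # Peel minimum-degree vertices until the induced diameter is <= k.
        while sub.number_of_nodes() > 1:
            if nx.is_connected(sub) and nx.diameter(sub) <= k:
                break
            drop = min(sub.degree, key=lambda kv: kv[1])[0]
            sub.remove_node(drop)
        if sub.number_of_nodes() >= min_size:
            clubs.append(set(sub.nodes))
            G.remove_nodes_from(list(sub.nodes))   # keep clubs disjoint
        else:
            G.remove_node(seed)                    # seed yields nothing useful
    return clubs

if __name__ == "__main__":
    # Two dense cliques joined by a single bridge edge.
    G = nx.union(nx.complete_graph(5), nx.complete_graph(5), rename=("a", "b"))
    G.add_edge("a0", "b0")
    print(find_disjoint_clubs(G, k=2, min_size=4))
```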
  • Inference and learning in Bayesian networks using overlapping swarm intelligence
    (Montana State University - Bozeman, College of Engineering, 2015) Fortier, Nathan Lee; Chairperson, Graduate Committee: John Sheppard
    While Bayesian networks provide a useful tool for reasoning under uncertainty, learning the structure of these networks and performing inference over them is NP-hard. We propose several heuristic algorithms to address the problems of inference, structure learning, and parameter estimation in Bayesian networks. The proposed algorithms are based on Overlapping Swarm Intelligence, a modification of particle swarm optimization in which a problem is broken into overlapping subproblems and a swarm is assigned to each subproblem. The algorithm maintains a global solution that is used for fitness evaluation and is updated periodically through a competition mechanism. We describe how the problems of inference, structure learning, and parameter estimation can be broken into subproblems, and we provide communication and competition mechanisms that allow swarms to share information about learned solutions. We also present a distributed alternative to Overlapping Swarm Intelligence that does not require a global network for fitness evaluation. For the problems of full and partial abductive inference, a swarm is assigned to each relevant node in the network, and each swarm learns the relevant state assignments associated with the Markov blanket of its corresponding node. In our approach to parameter estimation, a swarm is associated with each node in the network that corresponds to either a latent variable or a child of a latent variable; each node's swarm learns the parameters associated with that node's Markov blanket. We also apply Overlapping Swarm Intelligence to several variations of the structure learning problem: learning Bayesian classifiers, learning Bayesian networks with complete data, and learning Bayesian networks with latent variables. For each of these problems, a swarm is associated with each node in the network. This work makes a number of contributions to the advancement of Overlapping Swarm Intelligence as a general optimization technique. We demonstrate its applicability to both discrete and continuous optimization problems, and we examine the effect of the swarm architecture and degree of overlap on algorithm performance. The experiments presented here demonstrate that, while the sub-swarm architecture affects algorithm performance, Overlapping Swarm Intelligence continues to perform well even when there is little overlap between the swarms.
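The overlapping subproblems described above are built from Markov blankets: each node's swarm works over that node's parents, children, and its children's other parents, and the resulting groups naturally overlap. The sketch below shows only this factor-generation step on a tiny, hypothetical network; it is not the Overlapping Swarm Intelligence implementation from the dissertation.

```python
# Hedged sketch: derive one variable group per node (its Markov blanket)
# from a Bayesian network structure given as parent lists.
from collections import defaultdict

def markov_blankets(parents):
    """parents: dict mapping each node to the list of its parents in the DAG.
    Returns a dict mapping each node to its Markov blanket (a set of nodes)."""
    children = defaultdict(set)
    for child, ps in parents.items():
        for p in ps:
            children[p].add(child)
    blankets = {}
    for node in parents:
        mb = set(parents[node]) | children[node]
        for child in children[node]:
            mb |= set(parents[child])      # co-parents of each child
        mb.discard(node)
        blankets[node] = mb
    return blankets

if __name__ == "__main__":
    # Classic "sprinkler" structure (illustrative, not from the dissertation):
    # Cloudy -> Sprinkler, Cloudy -> Rain, Sprinkler -> WetGrass, Rain -> WetGrass.
    parents = {
        "Cloudy": [],
        "Sprinkler": ["Cloudy"],
        "Rain": ["Cloudy"],
        "WetGrass": ["Sprinkler", "Rain"],
    }
    for node, mb in markov_blankets(parents).items():
        print(node, "->", sorted(mb))
```

In this toy network the blankets of Sprinkler and Rain both contain Cloudy and WetGrass, which is exactly the kind of overlap between swarms that the abstract describes.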
  • Approximating a neuron with cylindrical segments
    (Montana State University - Bozeman, College of Engineering, 2003) Lin, Wenhao
  • MAXPLANAR: a graphical software package for testing maximal planar subgraph algorithms
    (Montana State University - Bozeman, College of Engineering, 1996) Zhao, Kedan
  • An efficient implementation of a planarity testing and maximal planar subgraph algorithm
    (Montana State University - Bozeman, College of Engineering, 1996) Li, Zhongyuan
  • Algorithms for three-label point labeling
    (Montana State University - Bozeman, College of Engineering, 2001) Duncan, Robert Wade
  • Algorithmic aspects of resource allocation in cognitive radio wireless networks
    (Montana State University - Bozeman, College of Engineering, 2013) Judson, Ivan Ross; Chairperson, Graduate Committee: Brendan Mumey
    Wireless networking is a critical component of today's internet infrastructure. Two examples of important wireless internet infrastructure are long-distance network backbone links and last-mile solutions for remote areas. Wireless technology already supplies a wide variety of consumer solutions, including analog television channels (TVWS), cellular infrastructure for massive-scale real-time communication, and computer networking for seamless global connectivity. Worldwide, there are an estimated 2.5 billion internet users and 6 billion cellular phone subscribers, and those numbers are steadily growing. Providing sufficient capacity for these diverse wireless applications and their growing user bases calls for more efficient use of bandwidth. We present multiple resource allocation algorithms to address this challenge in various aspects of wireless networking. Each algorithm focuses on a single wireless networking resource: antenna beam sector activation, directional antenna beam bearings and duration, joint routing and channel selection, and link-channel allocation. In terms of computation and memory, our topology control algorithms provide near-optimal performance at significantly lower cost. For each algorithm, a rich set of simulation scenarios compares its performance to the optimal solution. Ultimately, we present a topology control algorithm that provides an efficient solution to the channel rental problem: finding the most cost-effective set of communication channels for a wireless mesh network subject to a minimum performance guarantee. This problem arises in high-density traditional wireless networking, cellular networking, and sparse rural networking with last-mile internet connectivity, and topology control algorithms are well suited to all of these applications. The algorithms are shown to be robust against various network challenges, including topology, frequency availability, and interference.
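To make the channel rental problem concrete, here is a hedged sketch of a greedy cost-effectiveness rule: repeatedly pick the channel that adds the most useful capacity per unit cost until the performance guarantee is met. The channel names, capacities, and costs are invented, and this greedy rule is only an illustration of the problem, not the topology control algorithm developed in the dissertation.

```python
# Hedged sketch: choose a cheap subset of channels whose combined capacity
# meets a minimum performance guarantee, greedily by capacity-per-cost.

def rent_channels(channels, required_capacity):
    """channels: dict name -> (capacity, cost). Greedily pick the channel with
    the best additional-capacity-per-cost ratio until demand is met."""
    chosen, total = [], 0.0
    remaining = dict(channels)
    while total < required_capacity and remaining:
        name = max(
            remaining,
            key=lambda n: min(remaining[n][0], required_capacity - total)
            / remaining[n][1],
        )
        cap, _ = remaining.pop(name)
        chosen.append(name)
        total += cap
    if total < required_capacity:
        raise ValueError("available channels cannot meet the guarantee")
    return chosen, total, sum(channels[n][1] for n in chosen)

if __name__ == "__main__":
    channels = {
        "ch36": (54.0, 10.0),   # (capacity in Mbps, rental cost) -- illustrative
        "ch40": (30.0, 4.0),
        "ch44": (20.0, 2.0),
        "tvws1": (12.0, 1.0),
    }
    print(rent_channels(channels, required_capacity=60.0))
```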
  • An empirical study of the stochastic evolution algorithm for the VLSI cell placement problem
    (Montana State University - Bozeman, College of Engineering, 2008) Thamizhmani, Natrajan; Chairperson, Graduate Committee: Year Back Yoo
    The Stochastic Evolution (SE) algorithm is a relatively new heuristic method for combinatorial optimization that exploits an analogy with biological evolution. The algorithm begins with a random initial solution, or with a previously found good solution, and simulates the evolutionary process by eliminating the bad characteristics of the older generation, resulting in an improved newer-generation solution. It achieves this using functions and operations that test the suitability of characteristics for the existing environment: each characteristic of a species in the current generation must prove its suitability under the existing environmental conditions in order to remain unchanged in the next generation. This process is repeated until a certain number of iterations is completed or no significant improvement is observed, at which point the solution is obtained. In this project, the SE algorithm is studied and implemented to solve the very large scale integration (VLSI) cell placement problem, and the quality of its solutions and its running times are compared with those of the Simulated Annealing (SA) algorithm. Experiments show that the SE algorithm produces results comparable to those generated by the SA algorithm. The SE algorithm seems to be suitable in cases where the input is considerably large. It starts consuming more time than the SA algorithm as the input size increases, and the SE feature that increases the number of trials whenever the newer generation is better than the older one can increase the running time considerably.
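Based only on the description in this abstract, the sketch below shows a Stochastic-Evolution-style loop on a toy one-dimensional placement: perturb each cell, keep moves whose gain beats a random negative threshold, and grant extra trials whenever a new best placement is found. The cost function, netlist, and parameters (R, p) are illustrative and not taken from the project.

```python
# Hedged sketch of an SE-style loop for a toy linear cell placement
# (minimize the total span of each net). Illustrative only.
import random

def cost(order, nets):
    """Total span of each net under a linear placement `order` (list of cells)."""
    pos = {cell: i for i, cell in enumerate(order)}
    return sum(max(pos[c] for c in net) - min(pos[c] for c in net) for net in nets)

def perturb(order, nets, p):
    """Try relocating each cell; keep a move if its gain exceeds a random
    threshold drawn from [-p, 0], so mildly bad moves are sometimes kept."""
    for cell in list(order):
        old = cost(order, nets)
        i, j = order.index(cell), random.randrange(len(order))
        order[i], order[j] = order[j], order[i]
        gain = old - cost(order, nets)
        if gain <= random.uniform(-p, 0):
            order[i], order[j] = order[j], order[i]   # undo the move
    return order

def stochastic_evolution(cells, nets, R=10, p=2.0):
    current = list(cells)
    random.shuffle(current)
    best, best_cost, counter = list(current), cost(current, nets), 0
    while counter < R:
        current = perturb(current, nets, p)
        c = cost(current, nets)
        if c < best_cost:
            best, best_cost = list(current), c
            counter -= R          # reward improvement with extra trials
        else:
            counter += 1
    return best, best_cost

if __name__ == "__main__":
    cells = ["a", "b", "c", "d", "e", "f"]
    nets = [("a", "b"), ("b", "c", "d"), ("d", "e"), ("e", "f"), ("a", "f")]
    print(stochastic_evolution(cells, nets))
```

The `counter -= R` line is the sketch's version of the feature the abstract mentions: an improved generation earns the search additional trials, which is also the feature that can inflate the running time.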