Show simple item record

dc.contributor.advisor  Chairperson, Graduate Committee: John Sheppard  en
dc.contributor.author  Fortier, Nathan Lee  en
dc.date.accessioned  2017-01-21T17:49:52Z
dc.date.available  2017-01-21T17:49:52Z
dc.date.issued  2015  en
dc.identifier.uri  https://scholarworks.montana.edu/xmlui/handle/1/10134
dc.description.abstract  While Bayesian networks provide a useful tool for reasoning under uncertainty, learning the structure of these networks and performing inference over them is NP-hard. We propose several heuristic algorithms to address the problems of inference, structure learning, and parameter estimation in Bayesian networks. The proposed algorithms are based on Overlapping Swarm Intelligence, a modification of particle swarm optimization in which a problem is broken into overlapping subproblems and a swarm is assigned to each subproblem. The algorithm maintains a global solution that is used for fitness evaluation and is updated periodically through a competition mechanism. We describe how the problems of inference, structure learning, and parameter estimation can be broken into subproblems, and provide communication and competition mechanisms that allow swarms to share information about learned solutions. We also present a distributed alternative to Overlapping Swarm Intelligence that does not require a global network for fitness evaluation. For the problems of full and partial abductive inference, a swarm is assigned to each relevant node in the network. Each swarm learns the relevant state assignments associated with the Markov blanket of its corresponding node. In our approach to parameter estimation, a swarm is associated with each node in the network that corresponds to either a latent variable or a child of a latent variable. Each node's corresponding swarm learns the parameters associated with that node's Markov blanket. We also apply Overlapping Swarm Intelligence to several variations of the structure learning problem: learning Bayesian classifiers, learning Bayesian networks with complete data, and learning Bayesian networks with latent variables. For each problem, a swarm is associated with each node in the network.
This work makes a number of contributions relating to the advancement of Overlapping Swarm Intelligence as a general optimization technique. We demonstrate the applicability of Overlapping Swarm Intelligence to both discrete and continuous optimization problems. We also examine the effect of the swarm architecture and degree of overlap on algorithm performance. The experiments presented here demonstrate that, while the sub-swarm architecture affects algorithm performance, Overlapping Swarm Intelligence continues to perform well even when there is little overlap between the swarms.  en
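The abstract's core mechanism (overlapping subproblems, one particle swarm per subproblem, a shared global solution used to complete each partial candidate for fitness evaluation, and a periodic competition that writes improvements back into the global solution) can be sketched roughly as follows. This is a minimal illustration on a toy continuous objective, not the dissertation's implementation: the function names, the sphere objective, the variable-subset layout, and the PSO coefficients are all illustrative assumptions.

```python
import random

def sphere(x):
    # Toy objective (illustrative assumption): minimum 0 at the origin.
    return sum(v * v for v in x)

def osi_minimize(f, dim, subproblems, iters=200, swarm_size=10, seed=0):
    """Sketch of Overlapping Swarm Intelligence: one PSO swarm per
    (possibly overlapping) variable subset; the shared global solution
    supplies values for variables outside a swarm's subset during
    fitness evaluation, and swarms compete to write their best
    values back into the global solution."""
    rng = random.Random(seed)
    glob = [rng.uniform(-5, 5) for _ in range(dim)]  # shared global solution

    swarms = []
    for idx in subproblems:
        parts = []
        for _ in range(swarm_size):
            pos = [rng.uniform(-5, 5) for _ in idx]
            parts.append({"pos": pos, "vel": [0.0] * len(idx),
                          "best": pos[:], "best_f": float("inf")})
        swarms.append({"idx": idx, "parts": parts,
                       "gbest": None, "gbest_f": float("inf")})

    def full_eval(idx, vals):
        # Complete a partial candidate with the current global solution.
        cand = glob[:]
        for i, v in zip(idx, vals):
            cand[i] = v
        return f(cand)

    for _ in range(iters):
        for sw in swarms:
            # Evaluate particles against the shared global solution.
            for p in sw["parts"]:
                fit = full_eval(sw["idx"], p["pos"])
                if fit < p["best_f"]:
                    p["best_f"], p["best"] = fit, p["pos"][:]
                if fit < sw["gbest_f"]:
                    sw["gbest_f"], sw["gbest"] = fit, p["pos"][:]
            # Standard PSO velocity/position update on the subset.
            for p in sw["parts"]:
                for d in range(len(sw["idx"])):
                    r1, r2 = rng.random(), rng.random()
                    p["vel"][d] = (0.6 * p["vel"][d]
                                   + 1.5 * r1 * (p["best"][d] - p["pos"][d])
                                   + 1.5 * r2 * (sw["gbest"][d] - p["pos"][d]))
                    p["pos"][d] += p["vel"][d]
        # Competition: each swarm proposes its best subset values for the
        # global solution; a proposal is kept only if it improves it.
        for sw in swarms:
            if sw["gbest"] is None:
                continue
            cand = glob[:]
            for i, v in zip(sw["idx"], sw["gbest"]):
                cand[i] = v
            if f(cand) < f(glob):
                glob = cand
    return glob, f(glob)

subproblems = [(0, 1), (1, 2), (2, 3)]  # overlapping variable subsets
best, best_f = osi_minimize(sphere, 4, subproblems)
```

In the dissertation's Bayesian-network setting, the subsets would correspond to Markov blankets of nodes rather than arbitrary index ranges, and the competition and communication mechanisms are more elaborate than this accept-if-better rule.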
dc.language.iso  en  en
dc.publisher  Montana State University - Bozeman, College of Engineering  en
dc.subject.lcsh  Artificial intelligence.  en
dc.subject.lcsh  Graphical modeling (Statistics).  en
dc.subject.lcsh  Algorithms.  en
dc.title  Inference and learning in Bayesian networks using overlapping swarm intelligence  en
dc.type  Dissertation  en
dc.rights.holder  Copyright 2015 by Nathan Lee Fortier.  en
thesis.degree.committeemembers  Members, Graduate Committee: John Sheppard (chairperson); Clemente Izurieta; Mike Wittie; John Paxton.  en
thesis.degree.department  Computer Science.  en
thesis.degree.genre  Dissertation  en
thesis.degree.name  PhD  en
thesis.format.extentfirstpage  1  en
thesis.format.extentlastpage  140  en
mus.relation.university  Montana State University - Bozeman  en_US
mus.data.thumbpage  119

