Theses and Dissertations at Montana State University (MSU)

Permanent URI for this collection: https://scholarworks.montana.edu/handle/1/733

Search Results

Now showing 1 - 7 of 7
  • Improving the confidence of machine learning models through improved software testing approaches
    (Montana State University - Bozeman, College of Engineering, 2022) ur Rehman, Faqeer; Chairperson, Graduate Committee: Clemente Izurieta; This is a manuscript style paper that includes co-authored chapters.
    Machine learning is gaining popularity in transforming and improving a number of domains, e.g., self-driving cars, natural language processing, healthcare, manufacturing, retail, banking, and cybersecurity. However, because machine learning algorithms are computationally complex, verifying their correctness becomes challenging when an oracle is either unavailable or too expensive to apply. Software Engineering for Machine Learning (SE4ML) is an emerging research area that applies software engineering best practices and methods to the development, testing, operation, and maintenance of ML models. This work focuses on the testing of ML applications, adapting traditional software testing approaches to improve confidence in them. First, a statistical metamorphic testing technique is proposed to test Neural Network (NN)-based classifiers in a non-deterministic environment. Furthermore, a Metamorphic Relation (MR) minimization algorithm is proposed for the program under test, saving computational costs and organizational testing resources. Second, an MR is proposed to address a data generation/labeling problem: enhancing the effectiveness of test inputs by extending the prioritized test set with new tests without incurring additional labeling costs. The prioritized test inputs are then leveraged to propose a statistical hypothesis testing approach (for detection) and a machine learning-based approach (for prediction) of faulty behavior in two other machine learning classifiers, i.e., NN-based Intrusion Detection Systems.
Finally, to test unsupervised ML models, the metamorphic testing approach is applied to make several contributions: i) a broader set of 22 MRs for assessing the behavior of clustering algorithms under test, ii) a detailed analysis showing how the proposed MRs can target both the verification and validation aspects of testing the programs under investigation, and iii) evidence that verifying an MR using multiple criteria is more beneficial than relying on a single criterion (i.e., the clusters assigned). Thus, the work presented here makes a significant contribution to addressing the gaps found in the field and enhances the body of knowledge in the emergent SE4ML field.
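As an illustration of the metamorphic testing idea underlying this work, the sketch below checks a simple distance-based classifier against a translation-invariance relation. This is a minimal, hypothetical example (the thesis targets NN-based classifiers with a statistical procedure; the classifier, data, and MR here are illustrative only):

```python
import math
import random

def knn_predict(train_X, train_y, test_X):
    """1-nearest-neighbour prediction: the 'program under test'."""
    preds = []
    for x in test_X:
        dists = [math.dist(x, t) for t in train_X]
        preds.append(train_y[dists.index(min(dists))])
    return preds

rng = random.Random(0)
train_X = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(50)]
train_y = [rng.randrange(2) for _ in range(50)]
test_X = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(10)]

# Source test case: predictions on the original inputs.
source_out = knn_predict(train_X, train_y, test_X)

# MR: translating every feature by the same constant must not change 1-NN
# predictions, because Euclidean distance is translation-invariant.
shift = 3.7
shifted = lambda rows: [[v + shift for v in r] for r in rows]
follow_out = knn_predict(shifted(train_X), train_y, shifted(test_X))

# A violation of the relation signals a possible fault -- no oracle needed.
assert source_out == follow_out, "MR violated: possible fault in the classifier"
```

The value of the relation is that it verifies behaviour without knowing the correct label for any single input, which is exactly the oracle problem the abstract describes.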
  • Automated techniques for prioritization of metamorphic relations for effective metamorphic testing
    (Montana State University - Bozeman, College of Engineering, 2022) Srinivasan, Madhusudan; Chairperson, Graduate Committee: John Paxton and Upulee Kanewala (co-chair)
    An oracle is a mechanism to decide whether the outputs of a program for the executed test cases are correct. In many situations, an oracle is unavailable or too difficult to implement. Metamorphic testing (MT) is a testing approach that uses metamorphic relations (MRs), properties of the software under test expressed as relations among the inputs and outputs of multiple executions, to help verify the correctness of a program. Typically, MRs vary in their ability to detect faults in the program under test, and some MRs tend to detect the same set of faults. In this work, we aim to prioritize MRs to improve the efficiency and effectiveness of MT. We present five MR prioritization approaches: (1) Fault-based, (2) Coverage-based, (3) Statement Centrality-based, (4) Variable-based, and (5) Data Diversity-based. To evaluate these approaches, we conducted experiments on complex open-source software systems and machine learning programs. Our results suggest that the proposed MR prioritization approaches outperform the current practice of executing the source and follow-up test cases of the MRs in random order. Further, the Statement Centrality-based and Variable-based approaches outperform the Code Coverage-based and random-based approaches, and the proposed approaches show a 21% higher rate of fault detection than random-based prioritization. For machine learning programs, the proposed Data Diversity-based MR prioritization approach increases fault detection effectiveness by up to 40% compared to the Code Coverage-based approach and reduces the time taken to detect a fault by 29% compared to random execution of MRs. Further, all the proposed approaches reduce the number of MRs that need to be executed. Overall, our work saves time and cost during the metamorphic testing process.
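One way to picture fault-based prioritization is as a greedy "most additional faults covered" ordering over historical detection data. The sketch below is a simplified illustration, not the thesis's algorithms; the MR names and fault sets are made up:

```python
def prioritize_mrs(kill_matrix):
    """kill_matrix maps an MR name to the set of fault/mutant ids it detected
    on a reference version. Returns the MRs ordered greedily so that each
    next MR adds the most not-yet-covered faults."""
    remaining = dict(kill_matrix)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda m: len(remaining[m] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Hypothetical detection history: which faults each MR exposed.
kills = {
    "MR1": {1, 2, 3},
    "MR2": {3, 4},
    "MR3": {5},
    "MR4": {1, 2},
}
print(prioritize_mrs(kills))  # → ['MR1', 'MR2', 'MR3', 'MR4']
```

Executing MRs in such an order front-loads fault detection, which is the effect the 21% improvement over random ordering quantifies; it also shows why redundant MRs (here MR4) can be dropped to reduce the number of MRs executed.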
  • Analyzing the security of C# source code using a hierarchical quality model
    (Montana State University - Bozeman, College of Engineering, 2022) Harrison, Payton Rae; Chairperson, Graduate Committee: Clemente Izurieta
    In software engineering, both in government and in industry, there are no universal standards or guidelines for security or quality. There is an increased need for evaluating the security of source code projects, made apparent by the number of recent real-world cyber attacks. Our research goal is to design and develop a security quality model that helps stakeholders assess the security of C# source code projects. While many analysis tools can be used to identify security vulnerabilities, a model is beneficial in integrating multiple analysis tools, providing better coverage of the security vulnerabilities detected (compared to the use of a single tool) and aggregating these vulnerabilities upward into a broader security quality context. We accomplished our goal by developing and validating a hierarchical security quality model (PIQUE-C#-Sec) to evaluate the security quality of software written in C#. This model is operationalized using PIQUE, the Platform for Investigative software Quality Understanding and Evaluation. PIQUE-C#-Sec improves upon the security and quality models that precede it by focusing on being specific, flexible, and extensible. This thesis introduces the design of PIQUE-C#-Sec and examines the results of validating the model. The model was validated using sensitivity analysis, which consisted of collecting data on benchmark repositories and observing if and how the model output varied as a function of repository attributes. Additionally, the model was analyzed by testing how its node values changed as the tools reported additional vulnerabilities.
Based on these results, we conclude that the PIQUE-C#-Sec model is effective for stakeholders to use when evaluating C# source code, and the model can be used as a security quality gate for evaluating these projects.
  • An extensible, hierarchical architecture for analysis of software quality assurance
    (Montana State University - Bozeman, College of Engineering, 2021) Rice, David Mark; Chairperson, Graduate Committee: Clemente Izurieta
    As software becomes integrated into most aspects of life, the need to assess and guarantee the quality of a software product is paramount. Poor software quality can lead to traffic accidents, failure of life-saving devices, government destabilization, and economic ruin. To assess software quality, researchers design quality models. A common quality model decomposes quality concepts such as 'total quality', 'maintainability', and 'confidentiality' into a hierarchy that can eventually be linked to specific lines of code in a software system. However, a problem persists in the domain of quality modeling: quality assessment through the use of quality models is not finding acceptance among industry practitioners. This thesis reviews the weaknesses of modern modeling attempts and aims to improve the processes surrounding quality assessment from the perspective of both researchers and practitioners. The analysis uses the Goal/Question/Metric paradigm. Two closely related goals are presented that aim to analyze a process of generating, validating, and operationalizing quality models for the purpose of improvement with respect to cost, experimentative capability, collaborative opportunity, and acceptability. A system, PIQUE, is designed that provides functionality to generate experimental quality models. Test cases and exercises are run on the models generated by PIQUE to supply metric data used to answer the questions and goals. The results show that, in the context of a PIQUE-generated quality model compared to a similar non-PIQUE quality model, improvement can be achieved with respect to development cost and experimentative capability. Clear improvement was not found in the context of model operationalization difficulty and output acceptability. Ultimately, partial achievement of both goals is realized.
The work concludes that the current problems in the domain of quality modeling can be improved upon, and systems like PIQUE are a valuable approach toward that goal.
  • Mitigating software engineering costs in distributed ledger technologies
    (Montana State University - Bozeman, College of Engineering, 2018) Heinecke, Jonathan Taylor; Chairperson, Graduate Committee: Mike Wittie
    Distributed ledger technologies (DLTs) currently dominate the field of distributed systems research and development. The Ethereum blockchain is emerging as a popular DLT platform for developing software and applications. Challenges in Ethereum software development include the complex nature of working with DLTs, the lack of tools for developing on this DLT, and poor documentation of concepts for DLT developers. In this thesis, we provide building blocks that reduce the complexity of DLT operations and lower the barrier to entry into DLT development. We do this by providing a Node.js library, Ethereum-Easy, that simplifies operations on Ethereum. We use this library in a sample application called Rock, Paper, Scissors (RPS) and build a continuous integration, continuous delivery pipeline for deploying Ethereum code (Jenk-Thereum). This thesis aims to make development on DLTs easier, quicker, and less expensive.
  • Exploratory study on the effectiveness of type-level complexity metrics
    (Montana State University - Bozeman, College of Engineering, 2018) Smith, Killian; Chairperson, Graduate Committee: Clemente Izurieta
    The research presented in this thesis analyzes the feasibility of using information collected at the type level of object-oriented software systems as a metric for software complexity, using the number of recorded faults as the response variable. In other words, we ask: do the type systems of popular industrial languages encode enough of the model logic to provide useful information about software quality? A longitudinal case study was performed on five open source Java projects of varying sizes and domains to obtain empirical evidence for the proposed type-level metrics. It is shown that the type-level metrics Unique Morphisms and Logic per Line of Code are more strongly correlated with the number of reported faults than the popular metrics Cyclomatic Complexity and Instability, and performed comparably to Afferent Coupling, Control per Line of Code, and Depth of Inheritance Tree. However, the type-level metrics did not perform as well as Efferent Coupling. In addition to examining metrics at single points in time, successive changes in metrics between software versions were analyzed. There was insufficient evidence to suggest that the metrics reviewed in this case study provide predictive capability with regard to the number of faults in the system. This work is an exploratory study; reducing the threats to external validity requires further research on a wider variety of domains and languages.
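The core analysis described above is a rank correlation between a per-module metric and reported fault counts. The sketch below is a plain Spearman computation on made-up numbers, assuming no tied values; the thesis's longitudinal design and actual metrics are more involved:

```python
def ranks(values):
    """Return the rank (0-based) of each value, assuming no ties."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for rank, idx in enumerate(order):
        r[idx] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-module values of a type-level metric and fault counts.
metric = [3.0, 8.5, 1.2, 6.7, 4.4]
faults = [2, 9, 1, 7, 3]

print(spearman(metric, faults))  # → 1.0 here, since the two rankings agree
```

A strongly positive coefficient on real data is what "more strongly correlated to the number of reported faults" means for a metric; computing it per release is a first step toward the version-to-version analysis the study also attempted.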
  • Technical debt management in release planning: a decision support framework
    (Montana State University - Bozeman, College of Engineering, 2014) Griffith, Isaac Daniel; Chairperson, Graduate Committee: Clemente Izurieta; Hanane Taffahi, David Claudio and Clemente Izurieta were co-authors of the article, 'Initial simulation study' in the journal 'Proceedings of the 2014 Winter Simulation Conference' which is contained within this thesis.
    Technical debt is a financial metaphor used to describe the tradeoff between the short-term benefit of taking a shortcut during the design or implementation phase of a software product (e.g., in order to meet a deadline) and the long-term consequences of taking said shortcut, which may affect the quality of the software product. Recently, academics and industry practitioners have offered several models and methods that purport to explain or manage this phenomenon. Unfortunately, to date, no framework supports managers in making decisions regarding technical debt. Although similar solutions exist to support the release planning phase of software development, they focus on the management of new features and do not take into account technical debt and its effects on the development process. This thesis describes a software engineering decision support system focusing on three key components: analysis and decision, intelligence, and simulation. Supporting each of these components is a meta-model that bridges the gap between technical debt management and software release planning. To investigate the development of the analysis and decision and intelligence components, we used a reduced form of this meta-model in conjunction with a coalition formation games approach. This approach served to evaluate the technical debt management and release planning issues and was found, via simulation, to be superior to a first-come, first-served method (representative of typical agile planning processes). To investigate the development of the simulation component, we conducted a simulation study to evaluate different strategies for technical debt management proposed in the literature. The results of this study provide compelling evidence about current technical debt management strategies in the literature that practitioners can apply immediately.
Finally, we describe the initial work on an extended simulation framework which will form the basis of a complete simulation component for a technical debt management and release planning decision support framework.