MAKING SENSE OF CULTURES OF ASSESSMENT AND THEIR IMPACT ON STUDENT LEARNING: A QUALITATIVE META-SYNTHESIS

by

Mandy Lynn Wright

A dissertation submitted in partial fulfillment of the requirements for the degree of

Doctor of Education

in

Curriculum and Instruction

MONTANA STATE UNIVERSITY
Bozeman, Montana

May 2022

©COPYRIGHT by Mandy Lynn Wright 2022
All Rights Reserved

DEDICATION

To Lawrence, Lawson, and Scarlett. I could not have achieved this goal without your compassion, patience, and love. You mean everything to me. Now, let's have some fun!

In loving memory of my grandparents, Al and Sonia Audet. I wish you could be here to enjoy this moment with me.

ACKNOWLEDGEMENTS

I have had the great fortune to experience many people's support throughout my doctoral studies. Thank you to my mentor and chair extraordinaire, Dr. Ann Ewbank. Your kind guidance, reframing skills, and open mind gave me confidence and made me feel seen and heard. Much gratitude goes to my committee members, Dr. Rachel Anderson, Dr. Sandra Bauman, and Dr. Robert Carson. I so appreciate your feedback and willingness to participate in my doctoral work.

To Leanne, Elfie, Steve, and the many other friends and colleagues at Great Falls College who stopped to ask how things were going and share words of encouragement, your comments often came at just the right time. I especially want to thank Becky and Ashlynn for your frequent cheerleading and editing help. Leigh Ann, thank you for the coffee dates and for letting me process all the things. LeAnn, Maren, and Emily, thank you for your generous and constant friendship. Meg, my dissertation BFF, you have my infinite gratitude and admiration. To my parents, thank you for always being there and pushing me to do my best.

TABLE OF CONTENTS

1. CHAPTER ONE: INTRODUCTION
    Background
    Context for the Study
    Statement of the Problem
    Purpose of the Study
        Rationale for Methodology
    Research Questions
    Conceptual Framework
    Operational Definitions
    Assumptions
    Limitations
    Delimitations
    Significance of the Study
    Chapter One Summary

2. CHAPTER TWO: REVIEW OF THE LITERATURE
    Introduction
    Cultures of Assessment
    What is Learning?
    Learning in Postsecondary Education
        Learner-Centered Environments
        Adult Learning Models
        Assessment Practices
    Continuous Improvement
    Programmatic Assessment of Student Learning
        Defining Program Assessment
        Systems Perspective
        Classroom Assessment
    Concerns About Assessment
    Faculty Development in a Culture of Assessment
    Chapter Two Summary

3. CHAPTER THREE: RESEARCH DESIGN & METHODOLOGY
    Introduction
    Researcher Positionality & Interpretive Framework
        Positionality
        Interpretive Framework
    Method
        Qualitative Meta-Synthesis
    Data Collection Process
        Review Questions and Search Terms
        Selection and Sampling
        Data Sources and Search Strategies
            Exploratory Searches
            Scoping Searches
    Data Analysis Strategies
        Formal Identification of Literature
        Initial Assessment of Studies
        Final Literature Selection Process
        Analysis and Coding
    Ethical Considerations
    Methods of Achieving Authenticity, Trustworthiness, and Credibility
    Chapter Three Summary

4. CHAPTER FOUR: FINDINGS
    Introduction
    Key Theme 1: Changes to Learning Conditions
        Improvement Through Targeted Instructional Practice
        Improved Faculty-Student Interactions
    Key Theme 2: Changes Through Reciprocal Capacity-Building
        Improvement Through Data Use
        Students and Faculty Learning Together
            Self-Reflection as a Metacognitive Practice
            Student Success Promoting Sustained Practice
            Faculty Learning from Each Other
    Key Theme 3: Changes in Faculty and Student Mindsets
        Faculty Development Leading to Changes in Beliefs
        Student Perception of Learning Improvement
        Application of New Knowledge
    Ancillary Themes
    Ancillary Theme 1: Characteristics of Learning-Oriented Faculty Development
        Responsiveness to Faculty Readiness
        Collaborative, Structured, and Focused Professional Development
    Ancillary Theme 2: Accepting or Rejecting Audit Culture
        Audit Culture Characteristics
            Compliance Orientation
            Nonintegrated Assessment
            Misalignment of Institutional Resources
        Rejecting Audit Culture
            Valuing Teaching and Learning
            Contextualizing Inquiry
            Faculty Development Leading to Organizational Changes
    Chapter Four Summary

5. CHAPTER FIVE: DISCUSSION
    Introduction
    Research Questions
    Overview of the Study
    Conclusions and Related Literature
        Conclusions Related to RQ1
        Conclusions Related to RQ2
        General Conclusions
    New Conceptual Framework
    Limitations
    Recommendations for Future Research
    Implications for Practice
    Concluding Thoughts

REFERENCES CITED

APPENDICES
    APPENDIX A: RQ1 Search Strategies
    APPENDIX B: RQ2 Search Strategies
    APPENDIX C: Formal Literature Selection

LIST OF TABLES

1. RQ1 SPIDER Search Strategy Tool
2. RQ2 SPIDER Search Strategy Tool
3. Required and Related Search Terms
4. Summarized RQ1 Library Catalog Scoping Search
5. Summary of Key Themes and Related Codes
6. Summary of Ancillary Themes

LIST OF FIGURES

1. Conceptual Framework
2. Comparison of Meta-Synthesis and Meta-Analysis
3. Summary Diagram of the Search Process
4. Decision Tree for Final Study Selection
5. Thematic Analysis from Axial Coding
6. Conceptual Model of a Culture of Learning

ABSTRACT

Although cultures of assessment are frequently referenced in institutional effectiveness literature, higher education institutions in the United States continue to experience challenges with demonstrating student learning improvement. This study sought to identify evidence suggesting the broad impact of cultures of assessment on improved student learning outcomes and evidence suggesting the specific effect of faculty professional development in pedagogy and assessment on improved student learning outcomes. Using qualitative meta-synthesis methodology, the findings of fourteen empirical studies were analyzed, deconstructed, and reconstructed. This analysis led to the emergence of three key themes: changes to learning conditions, changes through reciprocal capacity-building, and changes in faculty and student mindsets. While the findings did not offer conclusive evidence in response to the study's research questions, they did lead to recommendations for improved practice in higher education, particularly the need to adopt a learning orientation toward student learning assessment.

CHAPTER ONE

INTRODUCTION

Background

The commonly stated purpose of student learning outcomes assessment in higher education, primarily at the program and institutional levels, is to continually improve student learning by using evidence obtained through an inquiry process, leading to evidence-based changes in the teaching and learning process (Banta & Blaich, 2011; Bresciani Ludvik, 2019). Using the lens of an improvement paradigm, rather than prioritizing compliance (Ewell, 2009), assessment leaders are encouraged to create favorable conditions that generate faculty trust, acknowledge faculty strengths, guide through resistance, and support continued growth and empowerment (Leaderman & Polychronopoulos, 2019). It is thought that these favorable conditions result in the co-creation of inclusive assessment models where student learning improvements are the primary emphasis (Bresciani Ludvik, 2019; Huba & Freed, 2000; Jankowski & Marshall, 2017; Roscoe, 2017).
While the stated purpose of outcomes assessment, as well as the goal of using an improvement paradigm to create a faculty-driven culture of assessment that leads to improvements in student learning, is widely accepted and recognized in much of higher education, how positive cultures of assessment are created and whether they achieve their stated goal of improving student learning is difficult to identify (Fuller, 2011). Much of the assessment literature speaks to the importance of establishing a culture of assessment, but how those cultures are established and maintained is often left to the reader's interpretation (Fuller, 2011; Fuller et al., 2016). Although the recommendation that assessment should be a faculty-driven effort (Huba & Freed, 2000; Suskie, 2018; Walvoord, 2010) is commonly heard, faculty and administrators may not understand the broader value and importance of assessment, resources may be inconsistently allocated to assessment activities, including training and professional development, and faculty may experience resistance to change, including fear of potential punishment for poor assessment results (Bowker, 2016; Suskie, 2018).

Notably, outcomes assessment is highly contextualized within institutions, particularly regarding data collection methods and advancing equity efforts (Jankowski et al., 2018; Montenegro & Jankowski, 2020). A fundamental principle in approaches to assessment is that "there is not 'one right way' to assess student learning" (Jankowski et al., 2018, p. 12). Given the distinctive qualities of higher education institutions' organizational cultures, it is difficult to determine whether or how the existence of an institutional culture of assessment impacts student learning. However, one aspect of what might constitute a functional culture of assessment may provide insight into direct impacts on student learning: faculty development (Weiner, 2009). Identified as a practice likely to have the most significant effect on student learning improvement, faculty development, particularly in pedagogy and assessment, may be one way to improve student learning outcomes at scale (Eubanks, 2021; Eubanks & Fulcher, 2021).

This study sought to interrogate the role of an institutional culture of assessment in demonstrating improved student learning outcomes in higher education. Through a qualitative meta-synthesis of existing qualitative and mixed-methods research studies, I identified studies related to the broad impact of cultures of assessment on improved student learning outcomes and the specific effect of faculty professional development in pedagogy and assessment on student learning outcomes.

Context for the Study

While standardized testing, targeted review of student achievement data, and an emphasis on accountability can be dated to the early years of the 20th century (Shavelson, 2007), what much of American higher education knows as the "assessment movement" began in the mid-1980s (Ewell & Cumming, 2017). Two lines of conversation emerged and continue, one of which is centered on approaching assessment work as a form of scholarship and process of inquiry to improve student learning. Concurrently, conversations about the need for educational institutions to demonstrate more accountability for student learning occurred, leading to the eventual dichotomous purpose of institutional assessment: improvement and accountability (Ewell, 2009; Ewell & Cumming, 2017).
Although conversations about assessment, accountability, and institutional improvement in higher education began more than thirty years ago, the current state of student learning assessment might be traced to the No Child Left Behind Act (No Child Left Behind [NCLB], 2002), which served as a catalyst for increased scrutiny of education at all levels (Roscoe, 2017). The subsequently increased focus on assessment and accountability became a priority in higher education following a publication by the Secretary of Education's Commission on the Future of Higher Education, known as the Spellings Report (U.S. Department of Education, 2006). By developing policies and practices to facilitate the outcomes-based assessment of student learning (Roscoe, 2017), higher education institutions and accrediting bodies sought to implement a proactive response to the Spellings Report and the accountability and improvement initiatives taking place in K-12 education in the early 2000s. These two publications contributed to the paradigm shift Chick (2018) described as a change from colleges providing instruction to colleges providing learning.

Before the publication of the No Child Left Behind Act (2002) and the Spellings Report (U.S. Department of Education, 2006), professional and non-profit higher education organizations like the Association of American Colleges and Universities (AAC&U) worked to encourage effective teaching, learning, and assessment practices (Rhodes, 2017). Recognizing the dual purpose of assessment (for learning and for accountability), and seeking to create a foundation for the outcomes assessment movement while avoiding the federal push toward standardized testing, AAC&U developed a proposal to support outcomes assessment through the Valid Assessment of Learning in Undergraduate Education (VALUE) project. This project later resulted in rubrics developed to assess several areas of knowledge common to American higher education, with the broader goal of providing data for accountability purposes (Rhodes, 2016). This is but one example of national professional organizations attempting to guide outcomes assessment practices in higher education and further the scholarship of teaching and learning. Unfortunately, after decades of data collection and discussion about effective student learning assessment methods and measures, a tension remains between seeking improvement in student learning outcomes and an external focus on accountability (Eubanks & Fulcher, 2021; Ewell, 2009; Ewell & Cumming, 2017).

While measures of institutional effectiveness include operational evaluations, such as strategic planning and program review, in addition to student learning (Welsh & Metcalf, 2003), developing a process for collecting evidence of student learning and then using that evidence effectively often proves the most difficult for institutions, at times leading to administratively driven approaches that are high in compliance and low in faculty ownership (Andrade, 2011; Walvoord, 2010). One challenge in establishing effective outcomes assessment models is the tension between adopting administratively driven models that may not engage faculty as partners and situating outcomes assessment as a reflective, collaborative process (Bresciani Ludvik, 2019; Suskie, 2018) that can acknowledge faculty experience and ability to meet student needs (Roscoe, 2017).
In seeking the ideal outcomes assessment process, institutions are encouraged to situate accountability as an internal commitment, with assessment work naturally emerging from a desire for inquiry (Banta & Palomba, 2015; Bresciani Ludvik, 2019; Maki, 2010). Beyond procedural details, student learning outcomes assessment typically involves academic programs and departments working to determine how students have met the learning goals articulated at the program and institutional levels (Maki, 2010). Institutions are encouraged to develop cultures of assessment, or cultures of evidence (Suskie, 2018), where faculty regularly collaborate on assessment implementation and data collection, reflect upon results, and take evidence-based action to improve student learning (Fuller & Skidmore, 2014; Kuh et al., 2015; Ndoye & Parker, 2010; Suskie, 2014). Encouraging collaboration through reflection and facilitated discussion is meant to promote ownership of the assessment process, potentially identifying opportunities for improvement in teaching and learning (Bresciani Ludvik, 2019; Suskie, 2018; Walvoord, 2010). In addition to intentional faculty development opportunities (Weiner, 2009), these conditions are thought to be factors in establishing an effective culture of assessment.

Accountability mandates will not disappear, particularly as questions about the value of higher education persist (Bresciani Ludvik, 2019). Despite faculty and institutional attempts to engage in student learning assessment, these efforts stop short of adequately demonstrating meaningful data reflecting what students know and can do (Eubanks & Fulcher, 2021; Fulcher et al., 2017). The prolific body of assessment literature written in the past thirty years and accountability policies like performance-based funding have not effected change. Efforts of regional accreditors to offer structured guidance and support (Provezis, 2010) may be misconstrued as encouraging a compliance mindset. However, comments from regional accreditors regarding faculty engagement in student learning assessment are typically supportive of faculty determining what constitutes evidence of student learning, as reflected in a statement from Dr. Barbara Johnson of the Higher Learning Commission: "It is important to help faculty understand assessment is not being done just because an accreditation visit is approaching, but to focus on improving student learning as a priority for the institution" (Welsh, 2018). However, championing the concept of a culture of assessment remains a frequent theme in most accreditation and assessment literature. Although organizational culture may be a factor in effective student learning assessment measures, the perceived impact of a "culture of assessment" may require more substantive analysis (Kezar & Eckel, 2002; Kezar, 2013).

Statement of the Problem

Higher education faces demands for accountability from many stakeholders, including students, parents, legislators, accreditors, and the general public (Eubanks & Fulcher, 2021; Ewell, 2009; Ewell & Cumming, 2017). Ensuring that student learning is assessed fairly and consistently is essential for the students themselves and institutional viability.
However, debate or confusion about the value and purpose of outcomes assessment, perceived resistance on the part of faculty, and a variety of other organizational, political, and philosophical concerns have led to minimal observable improvements in student learning for many campuses (Blaich & Wise, 2011; Bresciani Ludvik, 2019; Eubanks & Fulcher, 2021; Fulcher et al., 2017). The belief that creating a culture of assessment is critical to the success of the assessment movement is pervasive in the rhetoric of continuous improvement and institutional effectiveness (Fuller, 2011; Fuller et al., 2016; Hersh & Keeling, 2013; Suskie, 2014; Suskie, 2018). When considering cultures of assessment, it is unwise for institutions to assume that an administratively driven reporting process that demands compliance will have better results than one owned and maintained by faculty (Hersh & Keeling, 2013; Hutchings, 2011; Kuh et al., 2015; Maki, 2010; Ndoye & Parker, 2010; Walvoord, 2010; Welsh & Metcalf, 2003). However, it is problematic to assume that institutions and stakeholders are familiar with meaningful program and institutional assessment practices. Establishing continuous improvement systems focused on student learning can be complicated for a variety of reasons, including a lack of knowledge of how to apply assessment results, a lack of understanding of effective data collection methods, and a misunderstanding of the purpose of assessment for continuous improvement (Banta & Palomba, 2015; Bresciani Ludvik, 2019; Coates, 2015; Ewell, 2009; Hutchings et al., 2015).

While most assessment literature related to programmatic and institutional assessment contends that an institutional culture of assessment is crucial to creating lasting impacts on student learning, there appears to be a gap in scholarly discussions examining how a culture of assessment is created, what a strong culture of assessment looks like, and the efficacy of the practices associated with cultures of assessment (Bresciani Ludvik, 2019; Fuller et al., 2016; Kezar, 2013; Skidmore et al., 2018). Additionally, much of the assessment literature appears to assume that faculty, regardless of institution type, degree program, and varying preparatory requirements, have been formally trained to teach (Banta & Palomba, 2015; Boyer, 1990; Burns, 2017; Burnstad & Hoss, 2010; Fink, 2013; Smith, 2001). Further, scholarly discussions of student learning assessment tend to focus on assessment methods and perpetuate the concept of assessment cultures without critically examining whether and how they affect student learning (Fuller, 2011; Fuller et al., 2016; Kezar, 2013).

Complicating the issue of student learning assessment is the reality that "assessment, even conducted with pristine methodology, rarely catalyzes improvement efforts" (Eubanks & Fulcher, 2021, p. 3). While elements of effective cultures of assessment have been posited and proliferated in the literature (Maki, 2010; Weiner, 2009), existing research on cultures of assessment has been criticized as lacking a solid empirical or theoretical foundation (Fuller & Skidmore, 2014; Kezar, 2013; Ndoye & Parker, 2010). Eubanks and Fulcher (2021) have also expressed concerns about regional accreditors' standards resulting in a focus on process, leading to a compliance or "checkbox" mentality.
While the reasons for problems with institutional improvement efforts are myriad and perhaps contextual (Blaich & Wise, 2011), a significant concern is that there remains a lack of readily identifiable evidence of the impact cultures of assessment have on student learning improvement (Eubanks & Fulcher, 2021; Fuller et al., 2016; Guetterman & Mitchell, 2016; Kezar, 2013). As a faculty development and assessment professional at a small two-year college, I have expended significant effort seeking methods to engage faculty in creating an institutional culture of assessment. As I read a variety of literature written by luminaries in the assessment field, I began to realize that few cohesive recommendations for building a culture of assessment were offered, that definitions of these cultures vary amongst scholars, and that, surprisingly, few references were made to the impact of a culture of assessment on student learning outcomes.

Kezar (2013) offered critiques regarding the construct of institutional culture, including a lack of clear consensus on the definition of culture, differing understandings about the role culture plays in assessment (as an outcome or a process), and the dearth of research studying culture beyond single case studies. Some researchers, such as Guetterman and Mitchell (2016), have sought to address Kezar's (2013) concerns and fill a gap in the literature. Although criticisms of the existing literature on cultures of assessment may be accurate, perhaps there is an opportunity to view the current literature thematically with the intent to gain new insight and inform future practice.

Purpose of the Study

Based on criticism of existing research (see Eubanks & Fulcher, 2021; Fuller et al., 2016; Fuller & Skidmore, 2014; Guetterman & Mitchell, 2016; Kezar, 2013; Ndoye & Parker, 2010), there appears to be a need to identify evidence-based institutional effectiveness practices, specifically interrogating the role that establishing a culture of assessment plays in improving student learning outcomes. Rather than accepting the "wisdom of practice" (Weimer, 2001) associated with commonly held beliefs about the value of cultures of assessment, this study analyzed existing research and literature to identify whether and how cultures of assessment impact student learning outcomes in higher education.

One challenge of analyzing existing research focused on cultures of assessment is that several concurrent elements are thought to establish such cultures, including general education goals, consistent terminology, and administrative support, to name only a few (Weiner, 2009). Self-studies conducted for regional accreditation or external validation through initiatives like the Excellence in Assessment Designation (Excellence in Assessment [EIA] Designation, NILOA, n.d.) are based primarily on self-reporting and identification of the presence or absence of specific cultural elements. These challenges indicate that focusing on one aspect of a culture of assessment, e.g., ongoing faculty professional development focused on instructional and assessment strategies (Guetterman & Mitchell, 2016), may lead to more relevant data than studying assessment cultures holistically. Ongoing faculty professional development is one of Weiner's (2009) identified elements of a culture of assessment.
The presence of faculty development on college campuses is one way for an institution to "demonstrate its commitment to assessment and [raise] expectations among faculty" (Weiner, 2009, p. 29). One might also argue that professional development offerings may indicate other elements of a culture of assessment, implying that the institution has allocated resources to improve teaching and learning. Faculty participation in professional development programming suggests a willingness to develop as educators to enhance student learning experiences, thus potentially leading to faculty ownership of the assessment process. Further, faculty directly impact student learning experiences and, with increased knowledge of pedagogy and assessment practices, can improve student learning during instruction (Eubanks & Fulcher, 2021). Therefore, analyzing research identifying connections between faculty development and improved student learning outcomes may offer a valuable lens for understanding the impact of assessment cultures on student learning.

Rationale for Methodology

Qualitative inquiry encourages the "exploration of wicked problems [which] are messy, circular or aggressive; they have not a single right solution" (Savin-Baden & Major, 2013, p. 5). Qualitative inquiry also supports the desire to understand individuals' experiences and perceptions to co-create meaning and a shared body of knowledge (Creswell & Poth, 2018). Using qualitative inquiry for this study was appropriate because there is no single solution to developing organizational cultures of assessment. The issue of student learning assessment, while discussed broadly, remains contextualized in the institution (Blaich & Wise, 2011; Hersh & Keeling, 2013). Through a thematic, qualitative meta-synthesis (Booth, 2016; Ong et al., 2020; Thorne, 2008), I sought to identify literature and studies examining the broad impact of cultures of assessment on student learning outcomes, as well as the effects of targeted faculty development on improved student learning outcomes. This methodology is similar to grounded theory in that it allows the researcher to stay close to the data and form a "conception about what is taking place in a particular situation" (Savin-Baden & Major, 2013, p. 183).

Research Questions

This study examined two questions regarding the construct of cultures of assessment to interpret the potential influence of an institutional culture of assessment on improved student learning outcomes.

1. What evidence suggests that a culture of assessment improves student learning outcomes in higher education?
2. What evidence suggests that faculty professional development emphasizing pedagogical change and assessment strategies improves student learning outcomes in higher education?

Conceptual Framework

The conceptual framework guiding the emergent design of the study integrates my constructivist paradigmatic views with Sense-Making theory, as interpreted by Dervin (1999). Professional development has been situated as an opportunity for faculty to experience intellectual challenges that may result in greater personal and professional satisfaction (Angelo & Cross, 1993). While intellectual growth and potentially increased satisfaction are meaningful, ensuring that faculty are well-prepared to teach their classes is an institutional imperative. Some faculty, particularly at two-year colleges, come to the profession with teaching experience, while others have no formal preparation (Burnstad & Hoss, 2010). The institutional responsibility to employ good teaching practices in the classroom has also been closely connected to faculty career success and student academic success (Boyer, 1990; Burnstad & Hoss, 2010; Fink, 2013; Smith, 2001). These connections are directly related to the assumption that student learning assessment must be faculty-owned and faculty-driven. There is a logical connection between professional development and the ability of faculty members to engage in meaningful assessment: "Assuming that better assessment occurs when faculty members have the ability to make more informed decisions about student learning, some have proposed that a supportive environment is a prerequisite for quality assessment" (Guetterman & Mitchell, 2016).

Because much research on assessment cultures is theoretical or based on small case studies (Kezar, 2013; Skidmore et al., 2018), Sense-Making Theory (Dervin, 1999) was a good fit for studying the assessment culture literature. Considered both a meta-theory and a methodology, Sense-Making examines information design and provides a philosophical lens through which literature on cultures of assessment might be synthesized, leading to the development of new, co-created meaning:

    The resolution is that in the face of differences we must look not for differences in how humans, individually and collectively, see their worlds, but differences in how they make their worlds. In this view, there is more than a mandate for understanding how others see the world, there is an ontological necessity. If we conceptualize humans as struggling through an incomplete reality then the strugglings of others may well be informative to our own. (Dervin, 1999, n.p.)

Sense-Making as a meta-theory aligns with the idea that the landscape of the assessment movement appears to be an incomplete reality, with multiple, sometimes competing, worldviews. As illustrated in Figure 1, the dual nature of Sense-Making supports the goals of this study: to find empirical research and create new meaning from it. As a methodology, Sense-Making examines information design and how people understand the process of searching for and interpreting information. As a meta-theory, it connects to the broader construction of social interactions that lead to new understandings and meaning-making (Jonson et al., 2017). Through meta-synthesis, the study's analysis deconstructed and reconstructed the work of others to become more informed and generate new knowledge that may not have existed in prior scholarly conversations.

Figure 1. Conceptual Framework.

Operational Definitions

I acknowledge that the following terms may be used differently throughout the related literature. For this study, these terms will be used as defined here.

Active learning

Instructional methods that engage students in the learning process. Learning activities are typically participatory and require students to practice metacognition. Types of active learning include collaborative learning, cooperative learning, and problem-based learning. These experiences are centered around constructivist theory and move past instructor-centered teaching models (Bonwell & Eison, 1991; Prince, 2004).

Authentic assessment

Assessments of student learning that require the application of learning to new, typically 'real-world' contexts. Contrasted with conventional test-based assessments, authentic assessments typically involve complex tasks.
They can more effectively engage students in the learning process, integrate various skills, and require higher-order thinking skills (Wiggins, 1998).

Continuous improvement

An exercise in curiosity and self-reflection (Bresciani Ludvik, 2019; Volkwein, 2011) within an improvement paradigm (Ewell, 2009). It involves taking action to satisfy one's interest, then sharing and using the results to make changes to practice, policy, and programming at the course, program, and institutional levels.

Culture of assessment

Scholars studying the assessment movement and related practices conceptualize cultures of assessment similarly: institutional conditions and procedures that support or impede student learning assessment and the use of that data to support decision-making (Fuller et al., 2016; Maki, 2010; Weiner, 2009). Cultures of assessment are widely discussed and advocated in assessment literature, but little concrete information is offered regarding the establishment and ultimate efficacy of this concept.

Faculty development

For this study, faculty development refers to workshops and training primarily intended to support effective practice in instruction and assessment, typically offered through an institution's center for teaching and learning. In higher education, faculty development frequently refers to external opportunities, such as conferences or webinars emphasizing development in content-area expertise. This may also include inadequate one-shot campus training focused on compliance practices or brief coverage of skill development (Grannan & Calkins, 2018).

Learning/learning improvement

For this study, learning is conceived as a dynamic, complex process involving a combination of interactions and experiences that support meaning-making, culminating in a lasting change in behavior or knowledge.

Student learning outcomes assessment

An improvement-oriented process intended to focus primarily on student learning, reflective inquiry, the involvement of multiple stakeholders, and acting on evidence (Bresciani Ludvik, 2019; Jankowski & Marshall, 2017). Emphasizing the assessment of established learning goals, outcomes assessment is intended to improve teaching and student learning and create a culture of continuous improvement in educational institutions (Suskie, 2018), with external accountability as an overarching but not singular expectation. Assessment typically occurs at the course, program, and institutional levels, with course-level data often supplying information for programs and the institution. For this study, the term "assessment" may be used instead of "student learning outcomes assessment" and refers primarily to assessing student learning at the program and institutional levels.

Assumptions

In exploring this problem, I made the following assumptions:

1. The construct of cultures of assessment lends itself to examination because there are inherent and systemic problems in how student learning outcomes assessment has been characterized and interpreted at the national, regional, and institutional levels.
2. Faculty want to create effective and supportive learning environments for their students and find value in participating in professional development programming to support the development and improvement of teaching and learning practices.
3. Institutions featured in studies and literature seeking to draw connections between organizational culture and student learning improvement will have at least some of the components of a culture of assessment, as identified by Weiner (2009).
4. The presence or availability of faculty development programming related to instructional and assessment practices may indicate an institutional commitment to improving student learning outcomes.
5. To understand the connections between a culture of assessment and improved learning, faculty development can serve as a tangible indicator of the presence or absence of a culture of assessment.

Limitations

A critical awareness of my positionality as a former faculty member and current assessment professional and campus administrator informed my interpretation of the study's data and emerging design. In this context, the following limitations should be considered.

• The nature of a dissertation is time-bound, necessitating reliance on abstracts as a primary tool for selection decisions; some relevant texts may have been excluded due to poorly written abstracts.
• Terminology and jargon added complexity to the search process. Specialized language use may have excluded relevant literature from the search results.
• The study emphasized undergraduate students in higher education in the United States. Studies of populations outside these criteria were excluded, so a more limited perspective is presented in the findings.
• A team of researchers typically completes qualitative meta-syntheses and systematic reviews. Although I took steps to maintain objectivity, I may have excluded sources that other researchers might have included.
• Formal quality appraisal of the studies selected for analysis was not part of the process. The rationale for this decision is addressed in chapter three.
• The SPIDER search tool tends to return fewer results unless the search heading combinations are modified. This may have led to missed or overlooked relevant studies in the search process (Cooke et al., 2012; Methley et al., 2014).

Delimitations

The study identified and examined existing empirical studies focused on evaluating the impact of cultures of assessment on student learning outcomes. The dataset included studies where faculty development emphasizing pedagogy and assessment was the primary research focus. The study was limited to post-2013 empirical research, assuming that Kezar's (2013) critical work on problems in studies of cultures of assessment may have inspired a broader exploration of this concept.

Significance of the Study

Critically examining whether cultures of assessment improve student learning outcomes may offer greater insight into the value of student learning assessment from programmatic and institutional standpoints and how these practices might be improved. Institutions continue to struggle with the dichotomy of assessment for learning and assessment for compliance, with some institutions finding more success than others. As the assessment movement continues to grow and impact higher education, it is necessary to examine some of the commonly held beliefs about the value of assessment, specifically the pervasive belief that a culture of assessment has an impact on student learning (Banta & Palomba, 2015; Fuller et al., 2016). Further, as more recent scholarship posits, it is vital to understand whether and how faculty development programming impacts establishing and maintaining an effective culture of assessment (Eubanks & Fulcher, 2021).
The study results may support ongoing scholarly conversations about the value of student learning assessment in higher education and interrogation of compliance-based understandings of the nature of learning. The study results may also provide an alternative view of organizational cultures in higher education, potentially influencing future practice.

Chapter One Summary

The assessment movement in higher education has not led to the conclusions proponents have sought. Questions like "What are students learning?" and "How well are they learning it?" may seem straightforward, yet confusion persists about assessment's purpose, value, and even definition (Bresciani Ludvik, 2019). Assessment cultures are presented as the ideal environment where data-driven decisions will flourish. However, a critical view of whether and how the organizational structures thought to comprise a culture of assessment impact student learning outcomes appears lacking.

The literature review in chapter two provides a broad scope of background information related to learning, the assessment movement, and faculty development to offer the necessary context for the study. Chapter three provides an in-depth discussion of the study's methodology and research design. Chapter four supplies the meta-synthesis of the studies selected for final analysis. Chapter five discusses the findings and recommendations for practice.

CHAPTER TWO

REVIEW OF THE LITERATURE

Introduction

To frame cultures of assessment as a construct for a meta-synthesis and examine the role that faculty development may play in student learning outcomes assessment, I approach this review of the literature as an opportunity to offer foundational information about several aspects of student learning outcomes assessment, and therefore cultures of assessment, in higher education. I begin the discussion with a brief exploration of the construct of cultures of assessment to provide a conceptual basis for the remainder of the review. This is followed by an overview of theories and concepts of learning, both generally and in postsecondary education, to codify the conceptual basis of assessing student learning outcomes. I then offer a discussion of the construct of continuous improvement, including the concept of program assessment as one method of evaluating institutional effectiveness. Program assessment is also a significant concern for regional accreditation and often requires faculty and administrators to think more holistically about curricula and student learning beyond discrete course objectives. Next, I discuss concerns about student learning assessment and its role in higher education. Finally, I examine concepts of faculty development related to improvements in teaching, learning, and assessment, including the Scholarship of Teaching and Learning (SoTL) as an inquiry framework that can encourage scholarly approaches to assessment and support professional development for faculty.

Through this discussion, I offer the reader a broad overview of constructs that may contribute to what is perceived to be a successful learning or assessment culture. These constructs informed the search and selection process employed in identifying studies relevant to the research questions.

Cultures of Assessment

The construct of an institutional culture of assessment is not a recent development. Establishing a "pervasive, enduring culture of evidence and betterment" (Suskie, 2018, p. 118) is a standard recommendation in the institutional effectiveness literature. The term "organizational culture" might be understood as the expressed and embedded values and beliefs held by members of an organization (Lakos & Phipps, 2004; Peterson & Vaughan, 2002). Extending this understanding to the ideal culture of evidence and betterment implies an institutional commitment to using multiple types of evidence to inform decisions and ensuring that assessment work is an integral part of improving student learning, as well as institutional operations and governance (Kinzie & Jankowski, 2015; Suskie, 2018). Peterson and Vaughan (2002) also included institutional climate in their discussion of institutional cultures, noting that the institutional climate indicates stakeholders' perceptions of assessment practices and organizational structures supporting (or hindering) assessment work.

Subsequent discussions of assessment cultures have involved definitions ranging from an exclusive focus on student learning to an exclusive emphasis on the evaluation process (Kezar, 2013). These varied perspectives, coupled with competing objectives for assessment, whether for improvement or compliance (Ewell, 2009; Ewell & Cumming, 2017), can contribute to efficacy or dysfunction in institutional cultures concerning student learning assessment.

Weiner (2009) outlined fifteen concrete elements necessary to achieve a culture of assessment: general educational goals, common use of assessment terms, faculty ownership, ongoing professional development, administrative support and understanding, a practical and sustainable assessment plan, systematic assessment, student learning outcomes, comprehensive program review, assessment of co-curricular activities, institutional effectiveness, information sharing, planning and budgeting, the celebration of success, and new initiatives. Whether partially or wholly, these elements have been consistently reflected in the assessment literature as critical components of an effective assessment process. Many are visible in regional accreditors' expectations for assessing student learning (Banta & Palomba, 2015; Provezis, 2010). Maki (2010) identified three principles of an inclusive institutional commitment to evaluate student learning, combining and echoing Weiner's (2009) elements. The critical aspect of these frameworks, and similar others, is that the emphasis is clearly on assessment for improvement and student learning, not compliance.

One component of Weiner's (2009) and Maki's (2010) frameworks that receives frequent attention is faculty involvement in and ownership of the assessment process. The recommendation to create conditions for faculty engagement with outcomes assessment is logical, given that the assessment feedback loop is firmly centered in the classroom. Many institutions have struggled to create these conditions, undermining any institutional commitment to establishing a culture of inquiry and a meaningful outcomes assessment process (Hutchings, 2011; Kuh et al., 2015; Maki, 2010; Ndoye & Parker, 2010; Welsh & Metcalf, 2003).

Faculty perceptions of assessment directly impact the establishment of a culture of assessment. Skidmore et al. (2018) identified a typology into which faculty may be grouped based on how they perceive assessment's purpose at their institution.
In this framework, a culture of assessment was defined as "groups of people residing within institutional contexts that support or hinder the integration of professional wisdom with the best available assessment data to support improved student outcomes or decision making" (Skidmore et al., 2018, p. 1244). In this four-model framework, respondents categorized as belonging to a culture of student learning saw the improvement of student learning as the primary reason for conducting assessment. These faculty were more likely to ascribe value to institutional assessment practices, supporting the concept of a learning organization or culture of learning over compliance (Bresciani Ludvik, 2019).

Although faculty perception of assessment is significant in establishing an inquiry-based culture of learning (Bresciani Ludvik, 2019), faculty cannot be held solely responsible for creating a functional organizational culture. Cultures of assessment for learning and faculty engagement in these cultures rely on organizational leadership, climate, language, and context (Kuh et al., 2015; Lakos & Phipps, 2004; Maki, 2010; Weiner, 2009).

While stakeholder concerns and issues related to outcomes assessment are addressed later in this literature review, it is critical to acknowledge two overarching concepts related to cultures of assessment. First, the construct of a culture of assessment is deeply embedded in the literature, guidance, and expectations associated with assessing student learning and institutional effectiveness (Fuller, 2011; Provezis, 2010; Suskie, 2018). Second, the importance of higher education institutions functioning as learning organizations (including establishing effective cultures of assessment) cannot reasonably be disputed. Higher education aims to provide meaningful and transformational learning experiences to students (Bresciani Ludvik, 2019; Ewell, 2009; Felten et al., 2016; Mezirow, 1994, 1997). Committing time and resources to study organizational performance is necessary to understand where and how improvement needs to occur (Bresciani Ludvik, 2019; Huba & Freed, 2000).

Despite the seemingly fractured landscape of student learning outcomes assessment in higher education, it may be valuable to acknowledge a position guiding this review of the literature and the study. As Fuller (2011) has pointed out,

    While the focus of scholarly discourse has turned to forms of assessment (i.e. the usefulness of survey or tests in higher education), the usefulness of various forms of assessment will never be fully realized without a comprehensive understanding of the contexts in which assessment operates; that is, an institution's culture of assessment. (p. 6)

Fuller's (2011) argument is not that assessment cultures within a learning organization lack value or necessity. Building on that position, this study explores the widely accepted belief that establishing a culture of assessment will have a direct, positive impact on student learning outcomes. In examining this belief, it should be acknowledged that the cultures of assessment being explored are those of organizations where student learning is purported to be at the center of assessment activities (Bresciani Ludvik, 2019).

What is Learning?

The dichotomous landscape of student learning assessment in higher education (assessment for improvement or accountability) merits a brief discussion about the nature of learning itself.
A widely accepted standard definition of learning does not exist, as learning occurs for different purposes in multiple contexts, including cognitive, behavioral, motor, and social domains (Alexander et al., 2009; Schunk, 2020). To define learning, one must consider the learners, the context, the purpose, and the goals of the learning experience (Alexander et al., 2009; Wilson & Peterson, 2006). Novak and Gowin (1984) asserted that learning is “a change in the meaning of an experience,” not a change in behavior (p. xi). In their definition, the authors differentiated between learning and training, in which desired behaviors are developed (Novak & Gowin, 1984). The salient point of Novak and Gowin’s (1984) definition is its emphasis on facilitating learning experiences that encourage meaning-making. Schunk’s (2020) definition offered a seemingly conflicting position: “Learning is an enduring change in behavior, or in the capacity to behave in a given fashion, which results from practice or other forms of experience” (Schunk, 2020, p. 3). A similar definition was also shared by Olson and Hergenhahn (2012). Rather than concluding that Schunk’s (2020) position is in direct opposition to Novak and Gowin (1984), comparing these two definitions supports the conclusion that learning appears differently given the purpose or context of the learning experience (Alexander et al., 2009; Wilson & Peterson, 2006). Further, it may be helpful to consider varied understandings of the word “behavior.” Schunk (2020) asserted that learning is inferential; thus, we can only observe its products (i.e., behaviors). On the other hand, Novak and Gowin (1984) categorized “behaviors” with rote learning that lacks meaning and context for the learners. To better understand how these viewpoints may be similar, distinguishing between learning and performance (Bjork & Bjork, 2011) may offer some insight. The distinction is that current performance is not a valid or consistent measure of long-term learning (Bjork & Bjork, 2011). Acknowledging the role of memory in learning (Bjork & Bjork, 1992) is critical to understanding what learning is and how it differs from performance; recognizing that human memory does not work the way a computer’s hard drive functions offers a deeper understanding of learning. Bjork and Bjork (1992, 2011) identified two memory characteristics: storage strength and retrieval strength. Storage strength (Bjork & Bjork, 1992) refers to how new information integrates with previous knowledge and schema extant in long-term memory. Retrieval strength (Bjork & Bjork, 1992) refers to how easily information can be retrieved and used from memory. While both characteristics are meaningful, performance relies primarily on retrieval strength and is not thought to be a reliable indicator of actual learning or storage strength (Bjork & Bjork, 1992, 2011; Karpicke & Roediger, 2008). Novak and Gowin’s (1984) definition of learning reflected a constructivist lens that sought to emphasize meaningful learning evidenced by increased storage strength (Bjork & Bjork, 1992) of the concepts and ideas gained through educational experiences. Similarly, Schunk’s (2020) definition of learning advocated for change through practice and experiences, both of which can increase storage strength and, when used appropriately, can also support retrieval (Bjork & Bjork, 1992).
Perhaps by drawing on both Novak and Gowin’s (1984) and Schunk’s (2020) definitions, as well as Bjork and Bjork’s (1992, 2011) research on memory, learning might be characterized as a dynamic, complex process involving a combination of interactions and experiences that support meaning-making, culminating in a lasting change in behavior or knowledge. This proposed definition may further exemplify the dichotomous assessment landscape in higher education. If learning is contextual and complex, assessment practices focused on limited metrics (e.g., standardization of summative assessments) for accountability purposes may not offer adequate information about the true nature of institutional effectiveness or authentic student learning (Boud, 1990).

Learning in Postsecondary Education

To fully explore the construct of assessment cultures, it is necessary to recognize the nature of learning in higher education. Contextualizing learning in higher education as a dynamic, complex process involving interactions and experiences that support meaning-making, culminating in a lasting change in behavior or knowledge, may be thought of concisely as being learner-centered.

Learner-Centered Environments

Learner-centered teaching and learning environments require learners to be active participants (Alexander et al., 2009; Ausubel, 2000; Novak & Gowin, 1984; Schunk, 2020), engaged in constructing new knowledge, building on previous knowledge (Ausubel, 2000; Erickson, 2001), and wrestling with “desirable difficulties” (Bjork & Bjork, 2011). Instead of passively reacting to learning experiences (Fink, 2013), learners who participate in interactional experiences (Alexander et al., 2009) that challenge the construction of knowledge can experience deep learning and desired change. These interactional experiences might include varied practice conditions, spaced practice sessions, integrative instruction, and instructional testing (Bjork & Bjork, 2011; Karpicke & Roediger, 2008). Because learning is “personal and idiosyncratic” (Novak & Gowin, 1984, p. 5), effective learning experiences should acknowledge the whole person of the learner, including their thoughts, actions, emotions, and experiences (Erickson, 2001; Fink, 2013; Maslow, 1970; Miller, 2006; Novak & Gowin, 1984). The goal of learner-centered education should be to support individuals in developing and interrogating their interpretations of information and experiences (Erickson, 2001; Miller, 2006). Further, in acknowledging the whole person, learning should emphasize our humanness, ensuring that basic needs are met (Maslow, 1970). Learning experiences should also support opportunities to interact with others to co-create meaning, thereby constructing a unique understanding of the world and oneself (Erickson, 2001; Fink, 2013; Miller, 2006). Learning is contextual and personal (Alexander et al., 2009; Fink, 2013), so meaning must come from opportunities to connect existing knowledge and new concepts (Ausubel, 2000; Erickson, 2001). Egan (2003) expanded on this idea, not advocating for ignoring the principle of beginning with what students already know but instead encouraging educators to consider what students can imagine. Effective learning should guide learners toward developing enduring understandings and finding opportunities for integration, growth, and application (Erickson, 2001; Fink, 2013).
Adult Learning Models

A critical component of teaching and learning in higher education is the integration of theories of adult learning (Bélanger, 2011). The application of adult learning theories, specifically andragogy (Knowles et al., 2005), is characterized by the adult learner’s personal or contextual need for information and intrinsic motivation (Knowles et al., 2005). Maslow’s (1970) theory of human motivation based on a hierarchy of needs is foundational to the humanist theory of learning, which centers on the instructor as a learning facilitator (Bélanger, 2011). Kolb’s (1984) experiential learning theory emphasized context and learner-centered approaches. Recognizing the learner’s individual and developmental needs is a critical component of effective postsecondary learning (Knowles et al., 2005; Kolb, 1984; Maslow, 1970). In recognizing the adult learner’s individual needs, learning experiences should holistically consider the affective, cognitive, and physical domains (Bélanger, 2011; Bloom, 1956; Fink, 2013). Designed in this way, learning experiences become significant learning experiences (Bélanger, 2011; Fink, 2013). Bélanger (2011) described significant learning experiences as providing learners with the opportunity to create connections between their prior knowledge or experiences and their present reality. Fink (2013) developed a taxonomy of significant learning comprising six categories: foundational knowledge, application, integration, human dimension, caring, and learning how to learn. Ensuring that learning goals reflect the taxonomy of significant learning supports more resonant, holistic opportunities for learners to grow and develop in ways that are personally and professionally relevant to their context (Fink, 2013). Another critical component of effective postsecondary learning is its transformative nature. While Ausubel’s (2000) assimilation theory posits that meaningful learning occurs when new information is connected to prior knowledge and existing knowledge structures, Mezirow’s (1994) transformation theory describes a change in knowledge structures through reflective action (Bélanger, 2011; Mezirow, 1994). Viewed another way, transformative learning results in a change in a “frame of reference […] composed of two dimensions: habits of mind and a point of view” (Mezirow, 1997, p. 5). By facilitating significant learning experiences (Fink, 2013) that are relevant to the adult learner’s personal, professional, cognitive, and developmental needs and goals (Knowles et al., 2005; Maslow, 1970), educators can create learning conditions leading to the construction and application of new ways of knowing (Mezirow, 1994, 1997).

Assessment Practices

Learning should also involve multiple opportunities emphasizing assessment for learning (Erickson, 2001; Fink, 2013; Wiliam, 2011). Wiliam (2011) identified assessment as a significant aspect of effective instruction. Assessment for learning, which some may think of as formative assessment, is designed to improve students’ learning, including offering valuable and timely feedback to students and using assessment results to improve instruction and student self-assessment (Huba & Freed, 2000; Wiliam, 2011). These goals for assessing student learning may be seen as at odds with the federal government’s accountability and quality assurance mission and, by extension, regional accreditors (Provezis, 2010; Shavelson, 2007).
As Shavelson (2007) pointed out, learning can be assessed in different ways for different purposes, and even using standardized instruments is not a simple answer to a complex problem. While regional accreditors do not present institutions with prescriptive requirements regarding student learning assessment, there is a common expectation that institutions will collect evidence of student learning outcomes to improve teaching and learning, as well as institutional effectiveness (Provezis, 2010). Although this expectation is not inherently problematic, failure to meet these compliance standards is overwhelmingly common, despite years of continuous improvement efforts by institutions and their accreditors (Eubanks & Fulcher, 2021; Provezis, 2010).

Continuous Improvement

Student learning outcomes assessment and continuous improvement are closely and critically tied. In describing the origins of the continuous improvement movement, Huba and Freed (2000) explained that the integration of quality improvement in higher education occurred in line with the assessment movement. Deming’s (1986) Fourteen Points for Continuous Improvement has been credited as the most recognized framework for shaping the continuous improvement movement in education. Making data-informed decisions is one of its primary tenets (as cited in Huba & Freed, 2000) and requires goal-oriented and learner-centered assessment practices. Reflecting on the assessment movement, Ewell (2009) described two competing frameworks from which assessment can be viewed: an accountability paradigm and an improvement paradigm. In the accountability paradigm, the goal is institutional compliance and assessment to satisfy external entities (Ewell, 2009; Felten et al., 2016). In the improvement paradigm, the goal is to seek continuous improvement through evidence. When we consider the concept of continuous improvement as an integral part of student learning assessment, we can see that mere compliance does not support the transformative charge of education to provide quality learning experiences to all students (Bresciani Ludvik, 2019; Ewell, 2009; Felten et al., 2016; Mezirow, 1994, 1997). Felten et al. (2016) wrote that “assessment as improvement is a key to student and institutional effectiveness” (p. 119). In this paradigm, to achieve the goal of meaningful, evidence-driven changes, institutions must first articulate what matters (e.g., through learning outcomes) and then document progress toward reaching those goals, systematically applying the results (Banta & Palomba, 2015; Bresciani Ludvik, 2019). In a continuous improvement system, assessment never ends, and results are used for decision-making and improvement (Brown et al., 2018; Ewell & Cumming, 2017). Blaich and Wise (2011) have described continuous improvement in the context of assessment by noting that “assessment data has legs only if the evidence collected rises out of extended conversations across constituencies about (a) what people hunger to know about their teaching and learning environments and (b) how the assessment evidence speaks to those questions” (p. 12). Assessment does not need to be complicated, as evidenced by Fulcher et al.’s (2014) learning improvement model emphasizing re-assessment: “weigh pig, feed pig, weigh pig” (p. 5). Assessment and continuous improvement, when done effectively, may shape individual experiences, curricula, programs, and institutions (Coates, 2015; Volkwein, 2003).
Establishing institutional continuous improvement systems can be complicated for a variety of reasons, including a lack of knowledge of how to apply assessment results, a lack of understanding of effective data collection methods, and a perception of compliance as assessment’s primary purpose (Banta & Palomba, 2015; Coates, 2015; Ewell, 2009; Hutchings et al., 2015). It is up to institutional leaders to support continuous improvement by ensuring faculty and staff have access to professional development opportunities to build assessment skills (Grannan & Calkins, 2018; Hutchings et al., 2013). Institutional leaders should be familiar with the concepts and purposes of assessment and institutional improvement to lead these efforts (Bresciani Ludvik, 2019; Cardoso et al., 2016). As Fulcher et al. (2014) have pointed out, change is not the same as improvement. Making changes to programs or curricula is meaningful, but failing to measure the effect of the changes impedes the opportunity to recognize and document actual improvement. Ultimately, institutions need to develop assessment systems that help the people involved understand, evaluate, and act upon student learning and other issues tied to institutional effectiveness (Coates, 2015; Felten et al., 2016). Functioning within an improvement paradigm (Ewell, 2009), continuous improvement should be understood as an exercise in curiosity and self-reflection (Bresciani Ludvik, 2019; Volkwein, 2011): taking action to satisfy one’s interest, then sharing and using the results to make changes to practice, policy, and programming. More recent approaches to continuous improvement advocate for situating such work as a person-centered endeavor, potentially leading to a more robust culture of teaching, learning, and improvement (Bresciani Ludvik, 2019; Leaderman & Polychronopoulos, 2019).

Programmatic Assessment of Student Learning

Defining Program Assessment

Institutional continuous improvement practices, including efforts to build a “culture of evidence and betterment” (Suskie, 2018), are evidenced primarily through program-level assessment, although terminology in the assessment field is inconsistent, whether discussing assessment broadly, identifying learning goals, or defining program assessment (Bresciani Ludvik, 2019; Suskie, 2018; Walvoord, 2010). Banta and Palomba (2015) noted that the “concept of assessment resides in the eye of the beholder […] it is essential that anyone who writes or speaks about assessment defines it at the outset” (p. 1). Therefore, it is necessary to consider a variety of definitions for program assessment before examining its role in institutional effectiveness initiatives. In their definition of assessment, Huba and Freed (2000) emphasized gathering and using evidence from various sources to understand how learning experiences have impacted student learning. Huba and Freed (2000) focused on a learner-centered paradigm, asserting that the assessment process “is fundamentally the same at all levels, although the focus, methods, and interested parties may change somewhat from level to level” (p. 8). Walvoord (2010) took a similar approach, defining assessment as “the systematic collection of information about student learning, using the time, knowledge, expertise, and resources available, in order to inform decisions that affect student learning” (p. 2). Notably, Huba and Freed (2000) and Walvoord (2010) did not distinguish program assessment as a concept separate from assessment conceived more broadly.
Suskie’s (2018) definition of program assessment specifically references evaluating “how well all students in a program have achieved program-level learning goals” (p. 17). Although Suskie (2018) did differentiate between assessment as a broad concept and program assessment, this definition remains focused on student learning. Banta and Palomba (1999, as cited in Banta & Palomba, 2015) initially emphasized student learning in program assessment, defining it as the “systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development” (pp. 1-2). Banta and Palomba (2015) then expanded their definition to address gathering and providing evidence of “resources, implementation actions, and outcomes undertaken for the purpose of improving the effectiveness of instruction, programs, and services” (p. 2). Instead of focusing primarily on student learning, the authors saw assessment as a broader term related to institutional effectiveness. Jankowski and Marshall (2017) and Bresciani Ludvik (2019) defined outcomes-based program review in terms similar to Banta and Palomba’s (2015) revised definition, situating program assessment as a reflective inquiry process that includes all stakeholders as co-creators of programmatic learning goals, related both to student learning and to the broader functions of the program. Based on these related but nuanced perspectives, program assessment might be understood as a holistic system focused primarily on student learning, reflective inquiry, the involvement of multiple stakeholders, and taking action on evidence (Bresciani Ludvik, 2019; Jankowski & Marshall, 2017).

Systems Perspective

Approaching assessment from a systems perspective involves faculty coming to a consensus on what essential learning should look like within their programs and developing meaningful programmatic learning goals, viewing the program holistically instead of as a series of courses disseminating content knowledge (Biggs, 1996; Huba & Freed, 2000; Jankowski & Marshall, 2017). Suskie (2018) and Walvoord (2010) have also suggested assessing student learning by selecting specific points in time during the program and choosing programmatic assessment tools to measure student learning. Cohesive curriculum design would involve identifying multiple assessment levels of program learning goals (e.g., introductory, reinforced, proficient), as well as careful alignment of courses and program goals, ensuring that the learning taking place in courses reflects what students should be able to demonstrate upon completing the program and that students understand the big picture of the scholarship in which they are engaged (Biggs, 1996; Jankowski & Marshall, 2017; Suskie, 2018). Further, ensuring flexibility in assessment methods, reporting schedules, and report formats is thought to help faculty engage more fully and personally with the assessment process (Suskie, 2018; Walvoord, 2010). While these are practical suggestions that support the reporting of student outcome data, the opportunity to reflect on one’s teaching practice should also be considered, situating student learning assessment as a form of inquiry (Huba & Freed, 2000; Suskie, 2018; Walvoord, 2010). If the goal is to give students the best possible education (Huba & Freed, 2000; Suskie, 2018), then using an assessment system that is personally relevant to faculty can offer more relevant feedback on what is working and what is not.
Ultimately, as Suskie (2018) has pointed out, “It is not the assessment itself, but how faculty, staff, and college leaders use it that can lead to improvement in student learning” (p. 87).

Classroom Assessment

While there are numerous recommendations for high-quality program assessment practices, one critical factor is the importance of constructing assessment goals relevant to the program (Walvoord, 2010). In doing so, program faculty can build on the work already taking place in the classroom, including identifying areas that are working well and those that require improvement, while also keeping students at the center of the assessment process (Suskie, 2018; Walvoord, 2010). Rather than seeking only to comply with institutional and external assessment mandates, identifying a personally (or programmatically) relevant reason for assessment can yield more helpful information with which to support student learning (Suskie, 2018; Walvoord, 2010). Assessment of student learning at the program level requires effective assessment techniques in the classroom (Huba & Freed, 2000). Because learning ultimately depends on the needs and motivations of the learner (Maslow, 1970), constructivist and humanistic theoretical principles should drive teaching and learning, as both theories are student-centered and situate the instructor as a facilitator rather than a gatekeeper or bestower of knowledge (Bélanger, 2011; Knowles et al., 2005; Maslow, 1970). Ideally, learning at the course level should focus on significant learning experiences in which learners can contextualize and make meaning of the information presented (Fink, 2013) and on integrative conceptual learning (Ausubel, 2000; Erickson, 2001). Students should be able to see and form connections between content, courses, program requirements, and their individual contexts (Bélanger, 2011; Knowles et al., 2005). Emphasizing enduring understandings helps students connect these concepts to their lives and imagined possibilities (Ausubel, 2000; Egan, 2003; Erickson, 2001; Fink, 2013). This model for teaching and learning puts the learner at the center of the assessment process.

Concerns About Assessment

Although much assessment literature points to the vital role faculty play in the assessment of student learning outcomes (Kezar, 2013), a cultural mismatch between the “assessment for improvement” and “assessment for accountability” models (Ewell, 2009; Ewell & Cumming, 2017) has placed faculty at the center of continued debate regarding the efficacy of student learning assessment. While most college instructors care deeply about helping their students learn (Bok, 2013), faculty have been characterized as resistant to participating in assessment practices, and advice on how to engage faculty abounds in the assessment literature (e.g., Andrade, 2011; Banta & Palomba, 2015; Suskie, 2018; Walvoord, 2010). That resistance, or perhaps apathy toward assessment, may occur for multiple reasons. A New York Times opinion piece written by a professor at a large state university offered numerous criticisms of the assessment movement, highlighting systemic problems in American higher education and the proliferation of rigid and bureaucratic processes (Worthen, 2018).
While this article represents one faculty member’s opinion, it highlights the frustration and confusion potentially felt by higher education faculty as they work to navigate the landscape of teaching and learning, as well as compliance and accountability expectations (Bowker, 2016; Cardoso et al., 2013; Cardoso et al., 2016). Concerns that assessment reporting does not reflect actual student learning, the temptation to misrepresent student learning data to avoid being penalized (Boud, 1990), and a traditional lack of focus on evidence-based teaching practices at the postsecondary level (Bok, 2013) may all contribute to the problem of faculty engagement with assessment. How assessment has been discussed, both in the literature and within institutions, further highlights potential misalignment between instructional practices and compliance mandates. Situating assessment and continuous improvement as an integrative practice benefitting all involved requires avoiding bureaucratic, compliance-driven procedures, language, and perspectives (Ewell, 2009; Felten et al., 2016). In considering the role of language in building a culture of assessment and continuous improvement, issues of power and control must be acknowledged. Foucault (1977, 1978) theorized that power is not hierarchical but distributive, although not equally so, within relationships (as cited in Day, 2012). Wiliam (2017) has argued that the use of assessment implies the existence of power. While educators may agree on what assessments should look like and how they should be implemented, the use of the results can be problematic, resulting in a system of rewards and punishments and an uneven distribution of power (Wiliam, 2017). From the federal level to the classroom level, discussions about assessment frequently imply a deficit that requires remediation, potentially leading to faculty resistance to participating in assessment practices (Bowker, 2016; Cardoso et al., 2013) because of an observed imbalance of power. Perceptions of administrative control realized through bureaucratic, management-centered language (Cardoso et al., 2013; Cardoso et al., 2016) imply a lack of faculty agency, potentially leading to fear and distrust (Bowker, 2016; Cardoso et al., 2013; Cardoso et al., 2016). In analyzing terminology used to describe a program assessment cycle, Bowker (2016) identified a variety of terms using “unnecessarily formal, bureaucratic, legalistic or negative [language] for which more straightforward and collegial options exist” (p. 185). Bowker (2016) further identified terminology using more subtle, power-based language, which again could be rephrased in more collegial terms. The semantics of the jargon Bowker (2016) recognized supports Cardoso et al.’s (2013) argument that faculty find assessment language unfriendly, bureaucratic, and antithetical to the human-oriented values of higher education. Poole (2010) has identified similar issues with the term “quality assurance,” positing that it is a source of concern to academics due to the input/output connotation commonly ascribed to corporate and industrial settings.
If faculty in higher education primarily view education as a transformational, learner-centered endeavor through which learners construct meaning relevant to their needs and individual contexts (Knowles et al., 2005; Maslow, 1970; Mezirow, 1994, 1997), the connotations of the language used to describe assessment (particularly compliance-based and input/output vocabulary) may be seen as standing in opposition to the work in which faculty engage (Poole, 2010). In the context of dysfunctional assessment and continuous improvement systems, the issues of power and control and of fear or mistrust stemming from the assessment movement’s conventional language must be addressed (Bowker, 2016; Cardoso et al., 2013; Cardoso et al., 2016). Because faculty and institutions are expected to engage in assessment and report the results of that work, avoiding compliance-oriented language (Poole, 2010) may be one method of increasing trust between institutional leadership and faculty. In examining the impact language has on assessment processes, Bowker (2016) identified improved faculty participation in assessment training and activities after targeted semantic shifts in assessment-related communication were implemented. Effective organizational communication models can mitigate some of the fear and resentment arising from assessment mandates. In contrast, weak institutional communication between assessment leaders and faculty has been identified as a significant obstacle to developing a faculty-driven assessment culture (Cardoso et al., 2016). Communicating persuasively to faculty, through carefully chosen language, about the value of their work and how that work makes a difference for the institution (Grannan & Calkins, 2018) can “serve as a countertactic to help restore the balance of power” (Bowker, 2016, p. 190). For example, Suskie (2018) advocated for changing terminology from the concept of creating a culture of assessment to creating a “culture of evidence and betterment” (Suskie, 2018, p. 118). This semantic modification may help assessment practitioners and stakeholders emphasize the use of assessment evidence rather than the process of “doing” assessment (Suskie, 2018). Shifting language choices around assessment may seem trivial, but carefully choosing words and their connotations can make a difference in collegiality and institutional effectiveness efforts. As much of the conversation about student learning assessment centers on faculty and instructional practices, academic leaders’ roles may not be interrogated sufficiently. Broadly, leadership plays a significant part in creating and sustaining organizational culture, but few empirical studies clearly define leadership in relation to cultures of assessment (Kezar, 2013). If one defines academic leadership as those individuals in a position of authority responsible for driving a process and vision (Bok, 2013; Kezar, 2013), those individuals’ ability to communicate with faculty, staff, and external stakeholders in a way that “cultivates human flourishing by acknowledging human contributions” (Bresciani Ludvik, 2019, p. 155) is crucial to institutional continuous improvement efforts. In Leaderman and Polychronopoulos’s (2019) RARE model, academic leaders are encouraged to move beyond administrative directives, offering faculty and staff the opportunity to co-create assessment policies and practices (Ikenberry & Kuh, 2015; Leaderman & Polychronopoulos, 2019).
Assessment leaders are encouraged to “Relate,” “Acknowledge,” “Reflect,” and “Empower” faculty through relationship building, constructive feedback, and empathy (Leaderman & Polychronopoulos, 2019, pp. 33-36). Further, models for communication with internal and external stakeholders, such as the National Institute for Learning Outcomes Assessment (NILOA, 2011) Transparency Framework and evidence-based storytelling (Jankowski & Baker, 2019), offer an opportunity for transparent communication and work toward ensuring the enduring success and relevance of assessment. Literature discussing the growth of assessment cultures may highlight faculty development, but critically examining academic leadership’s professional development and preparation in student learning assessment is another concern. Bresciani Ludvik (2019) has identified barriers to meaningful program assessment, including a lack of understanding of assessment and a shortage of resources. Because institutional leaders are ultimately responsible for setting the institution’s cultural expectations and tone (including a culture of continuous improvement), they must understand assessment practices and demonstrate the capacity to allocate resources to support them (Bresciani Ludvik, 2019; Cardoso et al., 2016). Institutional leaders may not have these competencies, as their roles, experience, and expertise are unique and individualized. Poole (2010) observed that some institutional leaders (e.g., department heads) might function in a dual role as both faculty and administrator, while other academic leaders are former faculty. Because not all faculty have training in research methodology, data collection, or analysis (Banta & Palomba, 2015; Boyer, 1990; Fink, 2013), and some institutional leaders may not have been trained as teachers, administrators potentially have gaps in their competencies to lead continuous improvement efforts (Bok, 2013). Highlighting this point, Bresciani Ludvik (2019) explained that “we cannot expect faculty, staff, and administrators to engage in this inquiry process without first teaching them how” (p. 142). Beyond the need for foundational understandings of assessment and continuous improvement, academic leaders must know how to allocate the resources required to support and maintain an effective assessment process. A significant portion of the assessment budget, both financially and in terms of time, should be spent applying the data gathered, acting toward change and improvement (Banta & Blaich, 2011). Resources should be directed to professional development, institution-sponsored facilitated conversations analyzing data, creating plans to implement change, supporting those change efforts, and recognizing faculty and staff assessment efforts (Banta & Blaich, 2011; Grannan & Calkins, 2018; Higher Education Assessment Practitioners, 2018). Finally, although it may call their own beliefs and assumptions into question (Banta & Blaich, 2011; Suskie, 2018), institutional leaders must have the capacity to lead assessment efforts through effective communication and collaborative practice, effectively setting the tone and expectations for a culture of assessment. Kezar (2013) concluded that internal factors within institutions influence assessment programs’ effectiveness more heavily than pressure from outside forces (e.g., regional accreditors).
As many faculty continue to see assessment as a bureaucratic administrative mandate (Banta & Blaich, 2011; Cardoso et al., 2016), administrators at all levels must demonstrate that they value student learning rather than the accumulation of data (Grannan & Calkins, 2018; Roscoe, 2017; Stanny, 2018; Suskie, 2018). Ewell (1997) summarized the imperative for academic leadership development, stating, “Change requires conscious and consistent leadership. Experience in organizational transformation emphasizes the role that top administrators must play as ‘leading learners’” (para. 31). To be the type of leader Ewell (1997) described, academic leaders must pursue professional development to support their institutions, faculty, and staff in creating learner-centered continuous improvement cultures.

Faculty Development in a Culture of Assessment

Accountability initiatives and mandates require faculty and institutions to articulate how well students attain learning outcomes, so ensuring that faculty engage in effective teaching and learning practices is essential (Ouellett, 2010; Roscoe, 2017; Wehlburg, 2010). Further, the expectation that faculty will become well-versed in current instruction and assessment methods necessitates development programs (Ouellett, 2010). Faculty development programs and centers can support faculty in teaching more effectively, including building and refining instructional practice, engaging in curriculum alignment, and assessing student learning (Burnstad & Hoss, 2010; Hutchings et al., 2013; Wehlburg, 2010). Biggs (1996) pointed out that discussions of effective teaching and learning assume that faculty have articulated their expectations of student learning and that they employ evidence-based practices leading students to attain the expectations set out in learning goals. However, this assumption may not be valid, as not all faculty have been trained to be teachers (Banta & Palomba, 2015; Boyer, 1990; Fink, 2013; Smith, 2001), and the requirements to teach in higher education vary by institution type (Burns, 2017; Burnstad & Hoss, 2010). Faculty development centers can offer training and support to help bridge gaps in faculty experience or expertise in teaching and learning (Wehlburg, 2010). Effective faculty development programs should honor faculty’s existing expertise and knowledge while also supporting faculty in developing practices that both facilitate student learning and approach teaching from an inquiry-based perspective, creating change at the classroom, program, and institutional levels (Dickson & Treml, 2013; Fink, 2013; Hutchings et al., 2013; Wehlburg, 2010). Because student learning assessment involves a process of inquiry and requires the collection and evaluation of data (Bresciani Ludvik, 2019; Huba & Freed, 2000; Maki, 2010), institutions should ensure that faculty have adequate opportunity to “build capacities in assessment, curricular revision, teaching innovations, and scholarship of teaching and learning” (Grannan & Calkins, 2018, p. 19). While the assessment movement has demanded evidence of student success in attaining articulated learning outcomes (Roscoe, 2017), criticism that faculty fail to “close the loop” on student learning assessment endures (Banta & Blaich, 2011; Fulcher et al., 2014; Hutchings et al., 2015).
Despite demands for evidence and accountability, aggregating, examining, and acting on student learning assessment data can be an unfamiliar process for many instructors (Banta & Blaich, 2011; Grannan & Calkins, 2018; Suskie, 2018). Faculty development in this area might seek to offer an intellectual challenge to faculty at all career levels (Angelo & Cross, 1993) by encouraging the development of individual inquiry processes. Although the connection between institutional improvement and faculty development may not be readily apparent or acknowledged (Angelo & Cross, 1993; Burnstad & Hoss, 2010), an institutional commitment to faculty development is one potential remedy for dysfunctional, compliance-driven assessment models (Banta & Blaich, 2011; Grannan & Calkins, 2018; Walvoord, 2010). If faculty are to see themselves as assessment leaders, assessment must be perceived as a worthwhile expenditure of their time and offer flexibility in choosing what to assess and how to share the results (Banta & Blaich, 2011; Suskie, 2018; Walvoord, 2010). Collecting and examining direct evidence of student learning related to articulated learning goals is critical to effective assessment systems (Roscoe, 2017), and faculty should have access to professional development that supports that work. Because “systematic inquiry and intellectual challenge are powerful sources of motivation, growth, and renewal for college teachers” (Angelo & Cross, 1993, p. 10), some faculty may find more value and meaning in the assessment process if they can see it as a form of scholarship (Suskie, 2018) contributing to their professional growth and, by extension, institutional improvement. The Scholarship of Teaching and Learning (Boyer, 1990; Shulman, 1993) is an inquiry framework that can be used to assess student learning and reflect on one’s teaching practice, as well as to examine problems of practice and how one’s work affects students (Boyer, 1990; Chick, 2018; Hutchings, 2000; Hutchings et al., 2013). Hutchings (2000) has discussed the “transformational agenda” (p. 8) of the Scholarship of Teaching and Learning (SoTL), a change practice reflective of education’s essential purpose. Faculty who can examine and reflect on an aspect of their professional classroom practice are better able to have conversations within their departments and across the institution, leading to a more holistic understanding of “what it means to know deeply” (Hutchings, 2000, p. 7). While SoTL practices and results will vary based on the individual undertaking the research, having a personally meaningful research question or purpose in mind when examining student learning can lead to a deeper, more valuable understanding of one’s practice as an educator and the impact it has on student learning (Poole, 2018). Sharing the results of SoTL projects with colleagues within and outside the department can lead to changes in practice and broader changes to curricula and policy, having a more significant impact on student learning. Encouraging faculty to develop an understanding of the connections between the disciplines, teaching, learning, and assessment should be a primary goal for institutions (Angelo & Cross, 1993), and SoTL can support sharing that knowledge and the results of inquiry with colleagues within and outside the institution (Felten et al., 2016).
Faculty development programs that emphasize SoTL can help faculty see scholarly inquiry as a regular part of teaching and learning, supporting faculty in closing the assessment loop, applying results to classroom practice, and sharing the results of their work with others (Chick, 2019; Felten et al., 2016; Schwartz & Haynie, 2013).

Chapter Two Summary

This literature review serves as an overview of foundational concepts and information related to student learning assessment, institutional effectiveness, and the construct of cultures of assessment. By providing these foundational concepts, the discussion was also intended to function as an advance organizer for the study. The concepts, assumptions, and concerns identified in the literature informed this study’s search and selection process. The following chapter offers a discussion of the research methods employed.

CHAPTER THREE

RESEARCH DESIGN & METHODOLOGY

Introduction

As previously discussed in chapter one, a notable concern with the assessment literature, and perhaps the assessment movement itself, is the proliferation of unquestioned assumptions about the value and impact of institutional cultures of assessment. As an assessment professional, I observed that I had unquestioningly accepted the belief that assessment cultures accomplish the goals they are purported to support. Ostensibly, a culture of assessment intends to improve student learning and aid in the institutional use of student learning assessment data (Fuller et al., 2016). Where is the evidence demonstrating the efficacy of these practices? Much of the literature discussing student learning assessment and cultures of assessment typically offers suggestions to increase faculty participation in assessment activities and “buy-in” to the concept of a culture of assessment. In pointing to structures such as Weiner’s (2009) detailed list of fifteen elements and Maki’s (2010) more holistic “Principles of an Inclusive Commitment,” this literature emphasizes developing systems or processes, overshadowing inquiry into improvement efforts (Eubanks & Fulcher, 2021). While most institutions likely make concerted efforts to improve their students’ learning experiences and success rates, following “best practices” does not automatically result in meaningful change:

The strong body of methodological guidance has overshadowed the deeper, philosophical reasons assessment is done. The advancement of methods into common assessment practice has outpaced the exploration of questions regarding the meaning and value of assessment, leaving assessment practitioners with much guidance on how to do assessment and little guidance on why assessment is done. (Fuller, 2011, p. 2)

Expanding on Fuller’s (2011) concerns about the potential imbalance between assessment methodology and its meaningful application, inquiry into cultures of assessment has been criticized as lacking robust study or clear evidence of an effect on institutional improvement (Eubanks & Fulcher, 2021; Fuller et al., 2016; Kezar, 2013; Welsh & Metcalf, 2003). After exploring these and other critiques of how cultures of assessment have been studied, it became clear that a potential gap existed in the literature. Specifically, I wondered whether studies examining the impact of cultures of assessment on student learning outcomes existed at all.
Although researchers have acknowledged that case studies and other types of qualitative research are perceived as presenting ungeneralizable findings (Booth, 2017; Butler et al., 2016; Kezar, 2013; Sandelowski et al., 1997; Zimmer, 2006), a similar argument can be made about the use of purely quantitative studies (Newhart, 2015). As Sandelowski et al. (1997) remarked, in many perspectives, “generalization is narrowly conceived in terms of statistical significance” (p. 367), when qualitative research does, in fact, supply generalizations through the process of coding, categorization, and thematic analysis. Relying on statistical analysis alone does not offer more profound, nuanced insights into the interactions and experiences in teaching and learning contexts (Creswell & Poth, 2018; Newhart, 2015; Zimmer, 2006). Further, as Fuller et al. (2016) have pointed out, empirical studies of cultures of assessment do exist. Still, they tend to focus on what these cultures are and how they can be created, relying on statistical data regarding participation, satisfaction, or other concepts unrelated to student learning outcomes. Acknowledging that qualitative studies may provide a missing piece to the assessment efficacy puzzle offers the possibility that more information exists below the surface of existing studies erroneously considered ungeneralizable (Booth, 2017; Zimmer, 2006). From this point of inquiry, the purpose of this qualitative meta-synthesis was to identify qualitative and mixed-methods research studies that could provide insight into the following questions:

1. What evidence suggests that a culture of assessment results in improved student learning outcomes in higher education?

2. What evidence suggests that ongoing faculty professional development emphasizing pedagogical change and assessment capacity-building results in improved student learning outcomes in higher education?

The primary focus of this chapter is to provide a cohesive perspective of the study (Creswell & Poth, 2018) by offering an overview of the study’s design and points of consideration. Before discussing the study’s design, I acknowledge my positionality as a researcher because my personal and professional lenses influenced the interpretive nature of the study. I then describe the qualitative meta-synthesis approach, the data collection process, data analysis strategies, ethical considerations, and methods of achieving trustworthiness and credibility.

Researcher Positionality & Interpretive Framework

Acknowledging the role of the researcher in this study is critical, as engaging in qualitative inquiry and interpretation naturally requires researchers to interrogate and reflect on their paradigmatic views, particularly those that have the potential to influence the research context and findings (Savin-Baden & Major, 2013).

Positionality

As an educator with a nearly twenty-year career of teaching in secondary and postsecondary settings and recent administrative roles in higher education, I have made expanding my knowledge of effective teaching practices a priority. This persistent desire to improve and to understand how teaching and learning affect people led me to pursue a doctorate in education, with a research focus on assessing student learning in the contexts of faculty development and increasing equity for students (Montenegro & Jankowski, 2020). My personal and educational experiences have impacted my view of the importance of education.
I have witnessed and experienced both the transformative power of education (Paul & Quiggin, 2020) and education’s potentially harmful, if unintentional, effects when there is a disconnect between educational practices and students’ experiences. Through my work as an educator and my experiences as a student and parent, I have developed a belief that our education systems should emphasize evidence-based practices, critically examining how we teach and assess students to ensure inclusive access to the transformational possibilities education can provide. Having served previously as a faculty member in the English department and currently serving as director of the teaching and learning center at a rural two-year college, I approach my work from the perspective that higher education can create opportunities for personal, social, and economic equity in students’ lives. From that view, instruction and assessment practices significantly impact students’ futures. My guiding principle as a faculty development professional is that engaging in scholarly inquiry is a clear path to improved teaching and learning and to personal and professional growth for faculty. However, my goal as an educator and researcher is not to impose my viewpoint on my colleagues. Instead, I seek to encourage collective ways of knowing and to challenge commonly accepted yet potentially problematic practices to support the creation of effective, inclusive teaching and learning environments.

Interpretive Framework

As a researcher, I needed to interpret the results of my research while recognizing that my roles of student, parent, educator, and administrator might influence my objectivity. My paradigmatic views have grown with and from personal and professional experiences. Providing inclusive access to education’s transformative possibilities through evidence-based teaching and assessment models is a foundational tenet in this belief system. The interpretive lens of social constructionism with a post-modernist perspective guided my research purpose and overarching professional philosophy. I believe that people learn by constructing meaning from individual and collective interactions, often resulting in multiple ways of knowing. The contextual, constructed nature of learning suggests the need to critically examine systems of control, power, and compliance that may attempt to reduce the unique experiences of learners. This positionality reflects the social constructionist curiosity to better understand others’ views and experiences, with the intention of co-creating meaning and knowledge (Creswell & Poth, 2018, p. 24), and the post-modernist desire to explore questions of the nature of truth and the role of power (Savin-Baden & Major, 2013, p. 27). Learning experiences should lead to individual and collective ways of knowing. This ontological position suggests that learning is contextual, constructed by personal experiences, and best served by interpreting information and shared understanding. Epistemologically, I assume that a study’s findings reflect its authors’ experiences and multiple realities. I make the axiological assumption that educators’ individual and shared values impact how we conceptualize teaching and learning. My synthesis and interpretation involved a co-construction of reality from multiple ways of knowing.

Method

The overarching goal of this study was to understand the potential efficacy of cultures of assessment concerning improved student learning outcomes.
The intent to explore and understand a problem for which no adequate theories exist is one of the primary intellectual and conceptual reasons to conduct qualitative inquiry (Creswell & Poth, 2018; Maxwell, 2013), making the selected approach and study design compatible.

Qualitative Meta-Synthesis

Through the methodological lens of qualitative meta-synthesis, I identified, reviewed, and interpreted existing post-2013 studies focused on cultures of assessment in undergraduate higher education in the United States. The emergent design of the study included identifying existing research where faculty development emphasizing improved instructional and assessment practices could be assumed to be a component of a culture of assessment. Because the impact of cultures of assessment on student learning outcomes is not easily quantifiable and is highly contextualized, if studied at all, conducting a meta-synthesis of existing qualitative and mixed-methods studies was a suitable approach to making sense of this phenomenon. Qualitative meta-synthesis is a thematic study of existing literature or research focused on a specific subject or phenomenon (Thorne, 2008). Finfgeld (2003) noted that meta-synthesis is not a systematic literature review or a summing up or aggregation of concepts. Instead, a qualitative meta-synthesis examines the findings of qualitative research studies to “produce a new and integrative interpretation of findings that is more substantive than those resulting from individual investigations” (Finfgeld, 2003, p. 894). The qualitative findings of mixed-methods studies might also be included in a qualitative meta-synthesis. The methodology has also been called meta-ethnography (Noblit & Hare, 1988), and Booth (2017) referred to it as “qualitative evidence synthesis” to further differentiate it from systematic reviews. Noblit (2017) succinctly illustrated fundamental differences between qualitative meta-synthesis and quantitative meta-analysis, represented in figure 2. While both methods incorporate aspects of systematic review techniques, qualitative meta-synthesis is concerned with interpreting findings from qualitative research rather than seeking generalization of quantitative results, developing and representing insights rather than seeking causal connections, and seeking to understand the perspectives and experiences expressed in the analyzed studies rather than pursuing information to support predictions of causality (Cooke et al., 2012; Noblit, 2017).

Figure 2. Comparison of Meta-Synthesis and Meta-Analysis.

In a critique of research on assessment cultures, Kezar (2013) noted that few empirical studies sufficiently examined this construct and that much of the existing research was based on case-study methodology. While case studies can provide detailed descriptions of and insight into a specific context, this approach is not typically viewed as lending itself to broader generalizations or inferences (Booth, 2017), supporting Kezar’s (2013) critique. However, drawing on multiple, interrelated case studies and other qualitative research studies through meta-synthesis offers an opportunity to identify conceptual relationships not previously realized, as well as to generate new theory or expand previously narrowly conceived ideas (Habersang et al., 2019; Sandelowski et al., 1997).
As a qualitative research methodology, meta-synthesis (or qualitative evidence synthesis) might be conceptualized as a hybrid of structured and emergent design. Although meta-synthesis is not a systematic review, identifying research questions, designing a search strategy, analyzing the literature, and synthesizing the results appear systematic in nature (Booth, 2017; Butler et al., 2016). Though the phases offer a sequence for the project’s design, engaging in each stage remains emergent, as “what the qualitative researcher does determines the design” (Sandelowski & Barroso, 2003, p. 796). This study’s exploratory and interpretive nature naturally lent itself to the concept of ‘bricolage’ as conceived by Claude Levi-Strauss (1968), which offers a flexible perspective for the development of an emergent research design (Maxwell, 2013). In the spirit of bricolage, this study was concerned with discovering what lies below the surface of existing studies’ findings through continually evolving practice, deconstructing and reconstructing findings and ideas into something new. The research design acknowledged suggested norms of the methodology while also relying on the tools available and adapting to situational challenges, much like the spontaneous bricoleur described by Levi-Strauss (1968) and Maxwell (2013).

Data Collection Process

The study focused on identifying qualitative research published since 2013, emphasizing research on assessment cultures and their perceived impact on improved student learning outcomes. A concurrent line of inquiry sought to identify studies examining the effect of targeted faculty development on improved student learning outcomes. The impetus for selecting this line of inquiry stemmed from Kezar’s (2013) critique of the dearth of empirical research studying cultures of assessment. While Kezar (2013) is not the only scholar to problematize the research on cultures of assessment, other scholars have cited her work in attempts to address her critiques (e.g., Guetterman & Mitchell, 2016). In 2011, Fuller presented a conceptual framework for a survey of assessment culture and began publishing findings in 2013. Since then, the survey has been administered to multiple stakeholder groups to study connections between assessment cultures, student learning, and institutional effectiveness (Fuller et al., 2016). This work, along with the continued emphasis on faculty-driven assessment models for regional accreditation (Eubanks & Fulcher, 2021), suggested that a greater focus on the impacts of assessment cultures might be evident in post-2013 empirical research. Conducting a qualitative meta-synthesis requires, at minimum, the establishment of search criteria and a search strategy (Zimmer, 2006). The qualitative systematic study design protocol outlined by Booth (2017) offered an approach that served as a departure point for the emergent design of this study. The remainder of this section and the next are organized based on Booth’s (2017) suggested research design sequence. This section intends to provide the reader with an overview of the iterative and exploratory search process conducted for the study. Although some aspects of Booth’s (2017) sequence were excluded or modified to fit the purposes of the study, the overall search strategy protocol and literature selection process remained close to Booth’s (2017) suggestions.
Review Questions and Search Terms

While some qualitative evidence synthesis projects may establish a set of review questions and a separate set of research questions (Booth, 2017), this study's review questions and research questions were the same. To assist in constructing the search strategy and establishing keywords, I used the SPIDER search strategy tool found in evidence-based practice research (Cooke et al., 2012; Methley et al., 2014). SPIDER has been situated as a strategy to identify qualitative research, with its headings as follows: Sample, Phenomenon of Interest, Design, Evaluation, and Research type. The SPIDER framework provided a starting point for the search and served as a touchpoint whenever the search's scope became overwhelming. The review questions and their related keywords in the SPIDER framework are indicated in tables 1 and 2.

Table 1. RQ1 SPIDER Search Strategy Tool.
Sample: undergraduate higher education in the United States. Related terms: "college", "university", "postsecondary".
Phenomena of Interest: "culture of assessment" AND "improved student learning outcomes". Related terms: "assessment culture", "student learning outcomes assessment", "culture of improvement", "culture of evidence", "outcome-based education", "institutional assessment", "program assessment", "outcomes assessment", "institutional effectiveness"; "student learning outcomes improvement", "learning improvement".
Design: qualitative research studies including questionnaires, surveys, interviews, focus groups, case studies, observational studies, descriptive research, phenomenological research, grounded theory.
Evaluation: "view*" OR "experience" OR "perception" OR "perspective*" OR "attitude" OR "lived experience".
Research Type: qualitative OR mixed methods.

Table 2. RQ2 SPIDER Search Strategy Tool.
Sample: undergraduate higher education in the United States. Related terms: "college", "university", "postsecondary".
Phenomena of Interest: "faculty development" AND "improved student learning outcomes". Related terms: "professional development"; "student learning outcomes improvement", "learning improvement".
Design: qualitative research studies including questionnaires, surveys, interviews, focus groups, case studies, observational studies, descriptive research, phenomenological research, grounded theory.
Evaluation: "view*" OR "experience" OR "perception" OR "perspective*" OR "attitude" OR "lived experience".
Research Type: qualitative OR mixed methods.

Selection and Sampling

After establishing the review questions and search terms, I determined the selection criteria for the identified research literature. The intent of these criteria was twofold: first, to bound the study by choosing which studies would and would not be included, and second, to reduce selection bias on the researcher's part (Butler et al., 2016). The SPIDER framework informed the selection criteria by defining elements such as sample, phenomena of interest, and research type (Butler et al., 2016). When establishing the selection criteria, one consideration was determining whether to include grey literature in the sample.
As a strategy to reduce publication bias (Butler et al., 2016) and broaden the search scope, I opted to include limited types of grey literature, specifically conference proceedings and papers in the pre-publication stage. I excluded dissertations and theses not because this research does not hold value but because I wanted to include studies that an assessment practitioner, college administrator, or faculty developer might encounter through academic journals, professional conferences, or professional associations when seeking resources to improve student learning outcomes on their campus.

In identifying the selection criteria, I intended to ensure that the number of studies was neither too small nor too large and that their scope supported the opportunity for deep analysis (Finfgeld, 2003; Sandelowski et al., 1997). The selection criteria ensured that the sample was relevant to the specific research questions and representative of the phenomena of interest (Booth, 2016; Zimmer, 2006). The selection criteria and justifications to support each criterion are outlined as follows.

1. The study was published in or after 2013. Kezar (2013) criticized the lack of empirical studies focused on studying assessment cultures, implying that searching for pre-2013 studies would likely yield few relevant results.

2. The study examined evidence showing connections between improved student learning outcomes and a culture of assessment. This specifically included faculty development but may have also included other elements defined by Wiener (2009). These elements have been widely cited in the literature as evidence that a culture of assessment exists. The research questions identified qualitative or mixed-methods studies examining the impact of a culture of assessment or its components, specifically relevant faculty development, on student learning.

3. The study employed a qualitative or mixed-methods approach. Using qualitative data to better understand student learning improvement is significant because change or improvement in student learning may be characterized by observed behaviors, self-reported data, attitudes, or beliefs. Qualitative studies provide the opportunity to ask deeper questions about student learning (Newhart, 2015). Qualitative data from mixed-methods studies were included. Studies solely utilizing statistical reporting of results were excluded, as were commentaries, discussions, and studies with purely conceptual or theoretical backgrounds. Studies highlighting the development of frameworks, conceptual models, or programs were excluded if they did not also include findings regarding their impact on student learning outcomes.

4. The study was published in the English language. Due to limited time and resources, studies published in languages other than English were not included.

5. The study was focused on undergraduate higher education in the United States. Education systems outside the United States are unique and potentially dissimilar to the US higher education model. Due to limited time and resources, international studies were not included, as there was not sufficient time to determine equivalence between the US and global higher education systems. Studies examining assessment in graduate programs were excluded, as the self-directed and specialized nature of graduate-level work may not support generalizability across disciplines or educational levels.
Additionally, graduate retention and success rates do not typically receive the scrutiny that undergraduate rates do (Braxton & Francis, 2017), so accountability mandates are less likely to be directed at these programs. Studies related to PK-12 education were excluded because primary and secondary education in the United States typically encompasses structured and standardized assessment models, which may not generalize to higher education.

6. The study was from a peer-reviewed source. The goal was to identify empirical research that higher education professionals would typically seek out with a desire to better understand whether cultures of assessment improve student learning. Peer-reviewed academic journals, reports, and limited, relevant grey literature were included. Dissertations were excluded due to the project's time constraints and a desire to identify sources of information likely to be used by inquiring professionals. Whole books were excluded, but sections of books were reviewed when relevant.

Data Sources and Search Strategies

Once the selection criteria were established, I began conducting exploratory searches to confirm the keywords and gain a basic overview of the general availability of studies related to the search criteria. After performing the exploratory searches, I refined my search strategy by applying a modified version of Bramer et al.'s (2018) scoping search protocol. Both phases of the search process are described in detail in this section. The search strings, limiters, and results of each search are presented in the appendices. In the search descriptions, abbreviations of the SPIDER framework headings (in parentheses) represent keywords as described in table 3. The required terms for each heading and related terms, when applicable, are indicated.

Table 3. Required and Related Search Terms.
Sample (S): required term: "higher education". Related terms: "college", "university", "postsecondary". Filters: undergraduate, United States.
Phenomena of Interest (PofI): required terms: "culture of assessment" OR "faculty development" AND "improved student learning outcomes". Culture of assessment related terms: "assessment culture", "student learning outcomes assessment", "culture of improvement", "culture of evidence", "outcome-based education", "institutional assessment", "program assessment", "outcomes assessment", "institutional effectiveness". Faculty development related terms: "professional development", "educator professional development", "College teachers -- In-service training", "College teachers -- Training of", "college staff development", "college faculty professional development", "educational development", "Scholarship of teaching and learning", "SoTL", "professional continuing education", "staff development". Improved learning outcomes related terms: "student learning outcomes improvement", "learning improvement".
Design (D): qualitative research studies including questionnaires, surveys, interviews, focus groups, case studies, observational studies, descriptive research, phenomenological research, grounded theory.
Evaluation (E): "view*" OR "experience" OR "perception" OR "perspective*" OR "attitude" OR "lived experience".
Research Type (R): qualitative OR mixed methods.

Exploratory Searches.
I began the search process in the Montana State University Library online catalog (Primo) to gauge the existing body of literature and confirm the keywords selected in the SPIDER search framework. Using the "search everything" feature, I conducted an initial search using all required keywords from the SPIDER framework for research question 1. The search string pattern recommended by Cooke et al. (2012) involves using Boolean operators and keywords from all headings in the framework: (S AND PofI AND (D OR E) AND R). In this case, the first search string used was: ("higher education" AND "culture of assessment" AND "improved student learning outcomes") AND (questionnaire* OR survey* OR interview* OR focus group OR case stud* OR observational stud* OR descriptive research OR phenomenolog* OR grounded theory) AND (qualitative OR "mixed method*"). This search produced five results, none of which met the selection criteria. While these results were not surprising given the complexity of the search string, it was clear that the original SPIDER search string model would not yield enough information for the study. As Cooke et al. (2012) suggested, "In particular, we believe that adapting the logic that originally underpinned the SPIDER tool, so that S and PI are initially combined with "AND" and then, in turn, "AND-ed" with the three methodological terms (namely "D OR E OR R"), might prove a fruitful line of inquiry" (p. 1440). The adapted search, ("higher education" AND "culture of assessment" AND "improved student learning outcomes") AND (questionnaire* OR survey* OR interview* OR focus group OR case stud* OR observational stud* OR descriptive research OR phenomenolog* OR grounded theory OR qualitative OR "mixed method*"), produced eight results, which included the five from the first search. None of these met the selection criteria.

Recognizing that the search strategy would need to be adapted further, I performed another exploratory search in the library catalog using "search everything" with only the S and PofI required terms, adding the "peer-reviewed journals" filter, which presented 6,424 results. The "higher education" (HE) filter narrowed the results to 843. An initial manual review of these results eliminated citations where the abstract expressly indicated a quantitative approach, focused on K-12 education, or dealt with topics unrelated to the research questions. Any relevant results pertaining to faculty development were retained. This review resulted in a preliminary set of 200 articles. Although broad in scope, this more productive search reflected the experience shared by Cooke et al. (2012), who discovered that their search results failed to include studies that did not explicitly mention the research type and other keywords. As this first search stage was exploratory, I conducted four additional searches using varied combinations of SPIDER heading keywords, related words, and filters.

After the exploratory searches in the library catalog were complete, I sorted through the citations and abstracts identified as potentially meeting the selection criteria. These citations and abstracts were uploaded to the Zotero reference manager software (version 5.1), where I assigned relevant tags to the references found during the exploratory Primo searches. At this early stage, I used the tagging feature in Zotero to broadly categorize citations, indicating whether they applied to RQ1 or RQ2.
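To make the construction of these strings concrete, the following sketch shows how a SPIDER-style Boolean query might be assembled programmatically from keyword lists. The sketch is purely illustrative; the study's searches were built by hand in each interface, and the keyword lists shown here are abbreviated.

```python
# Illustrative only: assembling a SPIDER-style Boolean search string from
# keyword lists, following the adapted pattern from Cooke et al. (2012), in
# which S and PofI are AND-ed together and then AND-ed with one combined
# methodological group (D OR E OR R). Keyword lists are abbreviated.

def quote(term: str) -> str:
    """Quote multi-word terms so the catalog treats them as phrases."""
    return f'"{term}"' if " " in term else term

def or_group(terms: list[str]) -> str:
    """Join terms into a parenthesized OR group."""
    return "(" + " OR ".join(quote(t) for t in terms) + ")"

sample_and_pofi = ["higher education", "culture of assessment",
                   "improved student learning outcomes"]
method_terms = ["questionnaire*", "survey*", "interview*", "case stud*",
                "phenomenolog*", "grounded theory", "qualitative",
                "mixed method*"]

search_string = ("(" + " AND ".join(quote(t) for t in sample_and_pofi) + ")"
                 + " AND " + or_group(method_terms))
print(search_string)
# ("higher education" AND "culture of assessment" AND "improved student
# learning outcomes") AND (questionnaire* OR survey* OR ... OR "mixed method*")
```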
I observed that many of the references were international, requiring exclusion based on the selection criteria; these references were tagged as "exclude—international." After categorization, the citations found in the exploratory library catalog searches yielded 70 items for further review and 398 excluded items.

I then referred to the Montana State University Library's education research guide (Hansen, 2022) for assistance with selecting appropriate databases to continue and broaden the search process. The guide listed six relevant databases: ERIC, PsycInfo, Web of Science, Educator's Reference Complete, ProQuest Central, and Sociological Abstracts. In preparing to conduct database searches, I intended to maintain consistency in the search process, adhering as closely as possible to the SPIDER framework and search string construction. I began the exploratory database searches with the ERIC database. Following a process similar to the library catalog searches, I conducted five rounds of exploratory searches in ERIC, resulting in 70 items for further review and 597 excluded items. To round out this first stage of the search process, I repeated the process in Google Scholar, which yielded 73 results for further review.

All citations and abstracts for the exploratory searches were added to Zotero. I created additional tags for the ERIC and Google Scholar search results and modified the tags for the items found in the library catalog searches. The broad categorization comprised the following tags: RQ1—Primo, RQ2—Primo, RQ1—ERIC, RQ2—ERIC, RQ1—GS, RQ2—GS, exclude. After updating the citations in Zotero, I compared the results of all lists generated through the exploratory searches in Primo, ERIC, and Google Scholar, exporting each collection into CSV files and creating one master list. A total of 227 references were identified across the exploratory searches in the library catalog, ERIC, and Google Scholar. In the master list spreadsheet, all results were labeled with the discovery location (e.g., ERIC). I sorted all results alphabetically, looking for items that appeared in more than one search. These duplicate items were labeled with "triangulate." If an item appeared in the results for multiple search locations, I combined the locations in the notes and deleted the duplicates. This resulted in 181 articles to serve as a potential start set.

To gain a clearer picture of the items excluded during the search process, I exported the lists of all excluded search results from each search location, repeating the same method of labeling and sorting. The exploratory searches led to a combined 489 references being included in this list. The references in this list were labeled by the reason they clearly did not meet the study's selection criteria: international, quantitative, a book or dissertation, not relevant, or not a study. I also looked for results that had been excluded in more than one search, identifying 461 duplicates. Finally, I looked for inconsistencies between searches, such as a reference being excluded in one area and included in another. For example, 15 citations were marked for further review after they were identified for exclusion in one search and inclusion during another. After resolving the discrepancies for the identified references, two were determined to fit the selection criteria; the remainder were excluded. After this analysis, 170 citations remained for further review. These items were tagged in Zotero using the labels assigned during the initial review.
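The deduplication step can be pictured with a short sketch. The file names and column headings below are hypothetical, and the actual work was performed in a spreadsheet rather than with a script.

```python
# Illustrative only (hypothetical file names and columns): merging CSV exports
# from each search location into one master list and flagging items found in
# more than one location, mirroring the "triangulate" labeling step.
import csv
from collections import defaultdict

exports = {"Primo": "primo.csv", "ERIC": "eric.csv", "GS": "google_scholar.csv"}
found_in = defaultdict(set)  # normalized title -> set of search locations

for location, path in exports.items():
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Crude title normalization so near-identical entries match.
            found_in[row["Title"].strip().lower()].add(location)

with open("master_list.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Title", "Locations", "Label"])
    for title in sorted(found_in):
        locations = sorted(found_in[title])
        writer.writerow([title, "; ".join(locations),
                         "triangulate" if len(locations) > 1 else ""])
```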
Using Zotero's advanced search function, I began manually sorting through the identified resources based on their metadata and abstracts, determining whether they were suitable for the study. When it was clear from the title, abstract, or other metadata that a resource did not meet the selection criteria, it was tagged with "exclude—reason." Reasons for exclusion included international settings, a population other than undergraduate students, or general irrelevance to the Phenomena of Interest.

To triangulate or supplement the results of the exploratory searches, I used the "cited by" feature in Google Scholar and the Connected Papers (2021) website to determine other sources that had cited Kezar's (2013) article, which served as the foundation for my study. These results were not particularly helpful, as many were resources I had discovered in previous searches or were excluded based on the selection criteria. However, they confirmed some of the information I had already found, as well as the subject terms I was using when filtering results in the library catalog and databases. The "cited by" searches yielded five new results to add to my collection.

Following the confirmatory searches in Google Scholar and Connected Papers (2021) and manually sorting through the resources identified in the exploratory searches, I was ready to attempt a different search strategy. I was somewhat disappointed by the resources identified at this point. The manual review of selected full-text articles from this set and the Zotero keyword searches indicated that few resources fit this study well. While I had followed recommendations for effective search protocols, I was not convinced that I had identified a representative sample of studies related to my research questions nor that I would have enough information to support a meta-synthesis.

Scoping Searches.

In the spirit of the bricoleur (Levi-Strauss, 1968; Maxwell, 2013), I used the resources and tools available to continue building the emergent study design. Bramer et al.'s (2018) systematic review search protocol and optimization strategy were directed at quantitative systematic reviews, but I modified the protocol to fit my needs. Before undertaking this new search strategy, I participated in two independent consultations with academic librarians to confirm my initial and revised search strategies. Both confirmed that I had developed effective search criteria. The only suggestion I received was to consider including grey literature in my selection criteria, which had been addressed in the early stages of the study design.

While Bramer et al.'s (2018) recommendations for constructing single-line queries using the unique syntax of each database offered explicit steps, I found that this protocol did not lend itself well to databases outside the medical field. Instead, I merged my previous search strategies using the SPIDER framework with Bramer et al.'s (2018) scoping search protocol. Booth (2017) confirmed that a scoping search is often necessary before formally identifying the relevant literature for the meta-synthesis. I began the RQ1 scoping search (Bramer et al., 2018) in the library catalog with a modified SPIDER framework search (Cooke et al., 2012), using the S and PofI required terms: ("higher education" AND "culture of assessment" AND "improved student learning outcomes"), which yielded 65,209 results.
After applying filters related to the selection criteria (years: 2013-2023; peer-reviewed; articles; conference proceedings; higher education; education & educational research; education; research; studies; English), 10,243 results remained. I then modified the search fields from "search everywhere" to "title." I suspected that many results in this initial scoping search occurred because of the inclusion of "higher education" as a key term. I opted to revise the search process in the library catalog to address the required keywords for the SPIDER headings separately before combining them into different search string variations. This involved searching for individual keywords limited by "title," then combining the keywords to search for related titles. I repeated the same process limiting results to "subject." The filters applied at this stage were the date range (2013 and after), English language, peer-reviewed sources, and conference proceedings. No other filters related to selection criteria were used. The process is summarized in table 4, and the full search sequence is presented in Appendix A.

Table 4. Summarized RQ1 Library Catalog Scoping Search (filtered title search results / filtered subject search results).
1. Culture of assessment: "culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture" (title: 54; subject: 32).
2. Improved learning outcomes: "improved student learning outcomes" OR "learning improve*" OR "student improve*" (title: 239; subject: 26,535). In the subject search, "learning outcomes assessment" OR "outcomes assessment" was added, as the original search yielded 1 result.
3. Higher education: "higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university (title: 325,402; subject: 428,753).
4. Improved learning outcomes + higher ed: ("improved student learning outcomes" OR "learning improve*" OR "student improve*") AND ("higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university) (title: 18; subject: 1).
5. Culture of assessment + higher ed: ("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture") AND ("higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university) (title: 15; subject: 16).
6. Culture of assessment + improved learning outcomes: title contains ("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture") AND ("improved student learning outcomes" OR "learning improve*" OR "student improve*") (title: 0; subject: 1).
7. All keywords combined: ("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture") OR ("improved student learning outcomes" OR "learning improve*" OR "student improve*") AND ("higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university) (title: 72; subject: 26,551). Using AND for the culture of assessment and improved learning outcomes terms offered 0 results in the title search.

The title and subject scoping searches in the library catalog offered inconsistent results but provided insight into keyword choice and combination modifications for the individual database scoping searches. I also observed several results related to the medical field rather than to education or student learning, which was an unexpected finding.
This offered an additional point of consideration for screening in future searches. Next, I conducted scoping searches in the selected databases: ERIC, Educator's Reference Complete, Web of Science, Sociological Abstracts, PsycInfo, and ProQuest Central. These searches initially concentrated on the culture of assessment PofI key terms, as the exploratory searches in ERIC demonstrated that the "student learning improvement" and "higher education" keywords would yield numerous results. I retained the exact PofI search string keywords and combinations for each database, modifying filters and search fields according to each database's syntax while maintaining as much consistency as possible. When available, scoping searches focused on the title, abstract, subject, and full text. Generally, when a search yielded more than 200 results, I applied additional relevant filters. When the results of these searches were sorted by relevance, items listed after the first 100 typically were not relevant to the research question or selection criteria; several concerned corporate culture, the medical field, or education contexts outside undergraduate higher education.

After the scoping searches for the culture of assessment PofI keywords, I attempted to conduct combined searches using S and PofI keywords as I had done in the library catalog scoping searches. These combined searches provided inconsistent results and were impacted by the indexing and syntax of each database. When few relevant resources were discovered in these searches, I relied on database-specific filters related to the selection criteria combined with the PofI keyword searches.

As I concluded the scoping searches in each database, I exported the citations and abstracts to Zotero and organized them into collections based on search type and keywords. I created tags indicating the search string type and the database where each citation was found. A table summarizing all tags used at this stage and the number of results for each is available in Appendix A. After uploading and tagging all identified sources from the culture of assessment PofI scoping searches, I eliminated duplicates and noted any items that included faculty development as a key term or subject term. The intent was to identify articles related to faculty development discovered through the culture of assessment and student learning improvement searches and to begin creating a start set for the second research question.

After sorting through all the articles for the first research question, I repeated the same search protocol for RQ2. Since the only difference between RQ1 and RQ2 was the PofI terms (culture of assessment vs. faculty development), this search process was far less complicated than that for RQ1. For example, I conducted an exploratory search in the library catalog for only the faculty development PofI keywords because the other relevant terms remained unchanged from the RQ1 search. I then conducted scoping searches for the RQ2 PofI keywords and combined search strings with the other PofI and S keywords using the same databases. As with the RQ1 search results, I uploaded all citations and abstracts from the RQ2 searches to Zotero and categorized them by search string type and database, eliminating duplicates. A table summarizing all tags used for the RQ2 scoping searches and the number of results is available in Appendix B. After concluding all scoping searches for both research questions, 4,876 unduplicated articles had been categorized in Zotero.
Figure 3 presents a summary diagram of the search process. Although the intent was to support a future interpretive analysis rather than a comprehensive or aggregative analysis (Cooke et al., 2012), the inherent limitations of finding relevant qualitative research compelled me to search more exhaustively than initially planned. As previously discussed, inconsistencies in indexing, terminology, and methodological descriptions (Cooke et al., 2012) resulted in a complex search process to discover relevant studies.

Figure 3. Summary Diagram of the Search Process.

Data Analysis Strategies

A high-level analysis of the resulting studies naturally occurred throughout the search process and initial source categorization in Zotero as I developed organizational structures for the identified sources. Using the tagging feature in Zotero, I labeled citations with the applicable SPIDER headings. I also took advantage of the automatic tags included in the metadata from uploaded bibliographic information. As I encountered automatic tags that matched or related to this study's SPIDER keywords or selection criteria, I combined them under my predefined tags. In doing so, I was able to gain a better picture of what types of information resulted from the search process.

Formal Identification of Literature

Using the advanced search feature in Zotero, I searched all 4,876 collected resources to identify potential items to include in the final analysis. Approaching categorization from a deductive (top-down) view, I sorted sources by whether they appeared to fit the selection criteria. To confirm that tags had not been inappropriately applied, I first searched titles and abstracts for any words related to the exclusion criteria, applying tags based on the identified terms. After identifying and eliminating citations that met the exclusion criteria of PK-12, graduate-level, or international, I systematically searched titles, abstracts, and automatic tags using the PofI keywords and related terms. As I analyzed the results identified in the sorting searches, I tagged them as either "reviewed and excluded" or "further review." A total of 3,663 citations were initially identified for the "reviewed and excluded" collection. To ensure I did not improperly categorize potentially relevant sources, I conducted identical searches in the collection of citations previously marked for exclusion. No new results emerged.

Abstracts of sources that matched the selection criteria were then examined for explicit references to learning, student learning, or learning outcomes. If abstracts did not contain any of these terms, they were excluded for not meeting the study's goal, i.e., identifying evidence related to faculty development or cultures of assessment and their impact on student learning outcomes. The total number of excluded sources at this point was 4,678, leaving 148 items for the initial start set collection. After concluding the categorization process, I obtained full-text files for the 148 resources and continued to the next step of the review process, the initial assessment of the identified studies.

Initial Assessment of Studies

Using Zotero's advanced search function, I searched the content of all full-text files for instances of the selection criteria keywords, beginning with the individual SPIDER keywords and then building complex searches to combine keywords. Following this full-text screening process, 41 articles appeared to meet the combined search criteria.
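The deductive screening logic behind this pass might be sketched as follows. The field names and keyword lists are hypothetical simplifications; the actual screening used Zotero's advanced search and manual review rather than a script.

```python
# Illustrative only: a first-pass screen that tags a citation for exclusion
# when its title or abstract matches an exclusion keyword, and for further
# review when a phenomena-of-interest keyword appears.

EXCLUDE_TERMS = ["k-12", "graduate program", "international"]  # abbreviated
POFI_TERMS = ["culture of assessment", "assessment culture",
              "faculty development", "learning improvement"]    # abbreviated

def screen(citation: dict) -> str:
    """Return a screening tag for a citation with 'title' and 'abstract' keys."""
    text = (citation.get("title", "") + " " + citation.get("abstract", "")).lower()
    if any(term in text for term in EXCLUDE_TERMS):
        return "reviewed and excluded"
    if any(term in text for term in POFI_TERMS):
        return "further review"
    return "reviewed and excluded"  # no PofI match fails the selection criteria

example = {"title": "Building an assessment culture at a community college",
           "abstract": "A qualitative case study of faculty development."}
print(screen(example))  # further review
```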
Although 41 articles constituted a significantly smaller sample than the initially identified 4,876 sources, Booth (2016) pointed out that including too many studies in a qualitative evidence synthesis could impede deep analysis and theme development. For this reason, Booth (2016) indicated that an ideal number of studies falls between six and fourteen. The emergent design of this study did not specify a target number of studies for analysis, so Booth's (2016) recommendations provided valuable guidance. Similarly, Sandelowski et al. (1997) recommended that a sample of more than ten articles be evaluated with stricter selection criteria.

With Booth's (2016) and Sandelowski et al.'s (1997) sample size recommendations in mind, I uploaded the 41 full-text articles to NVivo for more in-depth screening. I generated text search queries to determine the presence of critical terms, particularly the phenomena of interest for the study, in each full-text article. The first queries for "learning" and "learning AND student*" predictably returned all 41 articles. A more specific query for ("improved learning"~5 OR "learning improvement"~5) searched for instances of the two words within five words of each other and returned 31 results. A query for all identified synonyms of "culture of assessment" searched the 41 start set articles for ("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "culture of improvement" OR "institutional assessment" OR "institutional effectiveness" OR "program assessment" OR "outcomes assessment"), returning 36 results. A query for ("faculty development"~5 OR "professional development"~5) searched for instances of the two words within five words of each other and resulted in 32 identified articles.

I then combined query criteria to conduct full-text searches for the phenomena of interest in response to the two research questions. First, a query for [("improved learning"~5 OR "learning improvement"~5) AND ("faculty development"~5 OR "professional development"~5)] returned 24 results. A query for [("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "culture of improvement" OR "institutional assessment" OR "institutional effectiveness" OR "program assessment" OR "outcomes assessment") AND ("improved learning"~5 OR "learning improvement"~5)] returned 26 results. Using the visual comparison tool in NVivo to identify overlapping articles between the two queries indicated that 19 articles potentially included all three phenomena of interest terms or their synonyms. Five articles were identified as containing only the faculty development and learning improvement terms, while seven contained only the culture of assessment and learning improvement terms. In total, 31 of the original 41 articles contained the phenomena of interest terms in the full text. To ensure accuracy of categorization, I manually reviewed the ten articles not included in the PofI queries. Three potentially addressed the phenomena of interest and were removed from the exclusion category. As a result of this manual review, seven articles inadequately addressed the selection criteria.
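As a rough illustration of what such a proximity query does, the sketch below checks whether two terms occur within five words of each other. It is a simplification and not NVivo's actual implementation, which also handles stemming and phrase matching.

```python
# Illustrative only: a simplified analogue of NVivo's "~5" proximity operator,
# testing whether two single-word terms occur within n words of each other.
import re

def within_n_words(text: str, term_a: str, term_b: str, n: int = 5) -> bool:
    """True if any occurrence of term_a falls within n words of term_b."""
    words = re.findall(r"\w+", text.lower())
    positions_a = [i for i, w in enumerate(words) if w == term_a]
    positions_b = [i for i, w in enumerate(words) if w == term_b]
    return any(abs(a - b) <= n for a in positions_a for b in positions_b)

sample = "Faculty reported that student learning clearly improved this term."
print(within_n_words(sample, "learning", "improved"))  # True
```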
As I prepared to evaluate the studies by applying the selection criteria more strictly, I determined that this was an appropriate time to decide whether to evaluate the quality of the studies. Recommended steps and processes in qualitative meta-synthesis design tend to offer similar advice, but quality appraisal is one issue where guidance is less consistent. The phrase 'it depends' might summarize researchers' perspectives on the necessity and value of quality appraisal (Butler et al., 2016; Lachal et al., 2017). Booth (2017) included this step as optional for robust study design, while Sandelowski et al. (1997) took a firmer stance: "In general, studies should not be excluded for reasons of quality, because, as we noted previously, there are wide variations in conceptions of the good, and in quality criteria" (p. 368). Because of the time-bound nature of this study and my confidence in the strength of the selection criteria created during the study design, I opted not to include quality appraisal as part of the analysis.

Final Literature Selection Process

To apply the selection criteria more effectively, I followed Butler et al.'s (2016) guidance to determine what constitutes 'data' before deeper analysis and data extraction. Since the date, language, sample, and source type selection criteria had been previously applied, I created a decision tree (see figure 4) to use the remaining selection criteria to identify studies for analysis. I also included the seven previously excluded articles to confirm my initial impressions, so all 41 of the initially identified articles were reviewed with the decision tree (figure 4).

Figure 4. Decision Tree for Final Study Selection.

First, I confirmed that a qualitative or mixed-methods approach was used in each study. No articles were excluded. Next, I reviewed each study's full text, focusing mainly on the findings and conclusions, to determine whether any reference was made to student learning in the context of the study's findings. This criterion excluded twenty-seven articles, including six of the seven items excluded through the manual review of the NVivo search query results. Eleven studies appeared to meet this study's purposes and selection criteria. Three studies were flagged as uncertain, as their findings discussed student learning only indirectly. I opted to retain all fourteen identified articles for the final analysis, recognizing that I could determine whether the three 'uncertain' studies fit during the analysis and coding process. The complete list of studies reviewed during the final literature selection process is provided in Appendix C.
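Reduced to pseudocode, the decision tree's remaining criteria amount to two questions. The sketch below is a hypothetical rendering of figure 4's logic, not an instrument used in the study.

```python
# Illustrative only: the final-selection logic of figure 4, applied after the
# date, language, sample, and source-type criteria had already been met.

def final_selection(method: str, learning_in_findings: str) -> str:
    """Classify a study as 'include', 'uncertain', or 'exclude'."""
    if method not in ("qualitative", "mixed methods"):
        return "exclude"    # criterion: qualitative or mixed-methods design
    if learning_in_findings == "direct":
        return "include"    # findings explicitly discuss student learning
    if learning_in_findings == "indirect":
        return "uncertain"  # retained and revisited during coding
    return "exclude"        # findings make no reference to student learning

print(final_selection("mixed methods", "indirect"))  # uncertain
```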
Analysis and Coding

During the search and selection process to identify the final set of studies for analysis and synthesis, I used Zotero's tagging feature to support data organization and topical categorization (Bingham & Witkowsky, 2022). In doing so, I was able both to identify studies that met this study's selection criteria and to begin generating preliminary interpretations of the current state of the literature on cultures of assessment. This deductive categorization process offered flexibility in determining the best method for a more in-depth coding strategy. Through the analysis and coding process, I sought to identify themes and categories leading to a richer understanding of how student learning outcomes may be impacted by cultures of assessment and their components, particularly faculty development (Creswell & Poth, 2018).

During the selection, sorting, and coding processes, I engaged in analytic memoing to better understand the data and organize my thoughts (Maxwell, 2013). These memos took the shape of notes, reflections, and annotations written while reading and analyzing the studies' content. While a deductive (top-down) coding model based on a priori assumptions about cultures of assessment might have been applied, I chose to let the data determine the codes and categories. Beginning with open or initial coding allowed concepts to emerge from the data more naturally than imposing a pre-determined schema would have (Saldaña, 2021). As the intent of the meta-synthesis was to deconstruct the existing research to discover or construct something new (Given, 2008), open or initial coding seemed to be the most appropriate starting point. Concurrently, I engaged in descriptive coding when appropriate, aligning the initial codes with the phenomena of interest from the SPIDER search tool: culture of assessment, faculty development, and learning improvement. Although this descriptive level of coding did not offer surprising or in-depth findings, it illustrated connections between the studies analyzed and the phenomena of interest (Saldaña, 2021). It also helped organize the more than 90 emerging initial codes and prevented the coding process from becoming unwieldy as new information was interpreted and deconstructed.

After the first coding stage was complete, I reviewed the analytic memos and text annotations, applying the initial codes. Following this process, I organized the codes into categories and subcodes. Employing axial coding as a second-stage process, I collapsed and recoded the 90 initial codes into six representative categories. I engaged in further analytic memoing to reflect on the connections between the codes within and among categories. I then conducted three additional rounds of axial coding, reassembling the previously deconstructed data (Saldaña, 2021) into key themes identified through the evidence of student learning improvement indicated in the analyzed studies. Figure 5 represents the transition from a descriptive code structure to a thematic analysis encompassing emergent key and ancillary themes.

Figure 5. Thematic Analysis from Axial Coding.

The key themes generalize conceptualizations of learning improvement deconstructed from the studies' findings. Articulated as changes to learning conditions, changes through reciprocal capacity-building, and changes in faculty and student mindsets, these concepts will be discussed in further detail in chapter four. While not directly related to the research questions, the two ancillary themes synthesize the facilitators of and barriers to student learning improvement explicitly and implicitly discussed in the analyzed studies. The ancillary themes, characteristics of learning-oriented faculty development and accepting/rejecting audit culture, provide additional context to the key themes and point to potential policy and practice improvement opportunities for higher education institutions. These ancillary themes will be explored further in chapter four.
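The movement from initial codes to axial categories can be pictured as a mapping exercise. The code and category names below are invented for illustration and do not reproduce the study's actual codebook.

```python
# Illustrative only (invented code names): collapsing open/initial codes into
# axial categories via a simple mapping, echoing the reduction of more than
# 90 initial codes into six representative categories.

AXIAL_MAP = {
    "flexible seating": "changes to learning conditions",
    "active learning strategies": "changes to learning conditions",
    "looking at student data together": "reciprocal capacity-building",
    "student self-assessment": "changes in mindsets",
}

initial_codes = ["active learning strategies", "student self-assessment"]
categories = sorted({AXIAL_MAP[code] for code in initial_codes})
print(categories)
# ['changes in mindsets', 'changes to learning conditions']
```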
Ethical Considerations

The primary ethical considerations for this study were avoiding selection bias during the sample selection process and maintaining awareness of my positionality during the analysis and synthesis phases. Although my initial motivation to conduct this inquiry stemmed from my professional role as a faculty developer and assessment practitioner, it was crucial to avoid misrepresenting the original studies and to retain the unique features of the studies under analysis. Because meta-synthesis is intended to function as a deconstruction for reconstruction and translation (Thorne et al., 2004), the study's goal was to construct new interpretations of existing findings. This reconstruction process involved co-creation between the synthesized findings and my interpretation of them. Seeking out opportunities to achieve authenticity and trustworthiness was a critical component of presenting the data in an ethical and transferable manner.

Methods of Achieving Authenticity, Trustworthiness, and Credibility

Modeling the study design on evidence-based, published recommendations was one method of demonstrating trustworthiness and credibility. Using the SPIDER search framework (Cooke et al., 2012) to formulate research questions and key terms resulted in search specificity and a demonstrated attempt at establishing confidence in the search process. Establishing search strategies and selection criteria as part of the overall study design, modeled after Booth's (2017) qualitative systematic review research design protocol, demonstrated an authentic attempt at transparency. Similarly, using Bramer et al.'s (2018) scoping search protocol to seek saturation of identifiable sources offered peace of mind and confirmation that I had done my due diligence in identifying as many studies potentially related to my research questions as possible.

I also maintained documentation of the search and selection process to promote transferability. I used Zotero's tagging feature throughout the search process to identify and document the inclusion, exclusion, and sorting process. Additionally, I documented all search terms, search strings, filters, databases, and strategies used during the search process. This documentation would allow another researcher to replicate my search.

Seeking confirmation from others added another layer of credibility. Engaging in two independent consultations with academic librarians to confirm my search strategy was a critical method of verifying that I had selected the appropriate search terms, databases, and filters to identify relevant resources. After selecting my initial set of studies for final review and analysis, I asked a knowledgeable co-worker to briefly review the selected studies to confirm my inclusion/exclusion decisions. We disagreed on only one study, which my co-worker thought fit the selection criteria and I was unsure about. Otherwise, my co-worker's impressions of the studies' relevance to my research questions and selection criteria matched my own.

Finally, acknowledging and revisiting my positionality as a researcher was critical to the entire study. On one level, because "meta-synthesis is the synthesist's interpretation of the interpretations of primary data by the original authors of the constituent studies" (Zimmer, 2006, p. 312), this interpretation becomes a co-creation of meaning between the original authors and the author of the meta-synthesis. Maintaining awareness of one's epistemological reflexivity (Savin-Baden & Major, 2013) supports a reflective process that acknowledges the complex nature of integrating oneself into the research without diving into absolute subjectivity. On another level, practicing personal reflexivity was necessary to understand my meaning-making experiences (Savin-Baden & Major, 2013).
Awareness of how my experiences and beliefs shaped the study design, research process, and data analysis, and of how the research has created new experiences, knowledge, and understandings, has required critical reflection (Savin-Baden & Major, 2013). Perhaps most illustrative of this is the concept of liminality. Savin-Baden and Major (2013) describe liminal spaces as a transitional "state of being in two positions […] for which confusion and ambiguity becomes the norm" (p. 76). Previously, I had experienced a sense of liminality as a tenured faculty member responsible for directing campus assessment of student learning. The ambiguity resulting from this position was shaped by the fact that I was asking my colleagues to participate in a process that I had unquestioningly accepted as a best practice. After stepping into a full-time administrative position that included assessment practitioner duties, I began to evaluate the purpose and value of cultures of assessment more critically, questioning my beliefs and assumptions, which led to this study. Recognizing that I had unhesitatingly accepted the practices and assertions of the assessment movement created a sense of liminality between my professional responsibility to champion the establishment of a culture of assessment and my desire to interrogate theoretical and conceptual assumptions. Acknowledging this liminal space through affective and discriminant reflection (Savin-Baden & Major, 2013) has pushed me to question how I feel about findings, then to wonder about the accuracy of my feelings and perceptions, remembering how my values and beliefs have influenced both my understandings and judgments.

Chapter Three Summary

This chapter summarized this study's data collection, analysis, and synthesis processes. Systematic and exploratory methods were used to identify existing studies related to the research questions. These methods included employing the SPIDER search framework, developing and applying selection criteria, and analyzing the data in the selected studies through initial and axial coding. Maintaining awareness of my positionality as a researcher and engaging in methods of assuring the credibility and trustworthiness of the data were of paramount importance to the overall quality of the findings. Although the final set of studies analyzed constituted a fraction of those identified during the search process, this is typical of systematic reviews and data synthesis projects: "Retrieval of the entire population of relevant studies would be possible only if a hand search of all relevant journals were undertaken, something that ideally should be avoided!" (Cooke et al., 2012, p. 1440). The next chapter presents the findings of the data analysis process.

CHAPTER FOUR

FINDINGS

Introduction

This chapter provides an overview of the findings discovered through the process of analysis, coding, and thematic framework development in response to the study's research questions. It does not intend to present the research and findings of the fourteen individual studies selected for analysis. Instead, it is an examination of the emergent themes related to views of student learning improvement through the lens of Sense-Making Theory. As is typical of qualitative meta-synthesis, findings are presented as integrated, interpretive themes.
As Sandelowski et al. (1997) have suggested, qualitative meta-synthesis is concerned with "enlarging the interpretive possibilities of findings and constructing larger narratives or general theories" (p. 369). Therefore, findings are not cited according to the studies synthesized, because the interpretive thematic structure integrates findings across studies. Representative quotations are cited when included to illustrate or enhance the discussion.

For this study, learning was defined as a dynamic, complex process involving a combination of interactions and experiences that support meaning-making, culminating in a lasting change in behavior or knowledge. Using this definition to guide the analysis allowed the inclusion of findings that discussed learning from a broader perspective than grades or standardized assessments might supply, particularly faculty perceptions of student learning and student self-assessment of learning improvement.

In deconstructing and synthesizing the findings of these studies, it was critical to recognize that none of the studies articulated constructed meanings of learning or learning improvement; thus, the onus was on the reader to interpret the authors' subjectivity concerning these concepts. Similarly, as most of the studies did not explicitly refer to cultures of assessment as a concept or component of the research questions, an assumption was made that the presence of cultures of assessment was implicit in the studies that explicitly evaluated the impacts of student learning assessment models or targeted faculty development programs. For these reasons, and because the two research questions for this study were closely related, the findings in this chapter are presented by theme rather than by direct response to the research questions. The relationship between the findings and this study's research questions will be addressed in chapter five. Again, the research questions for this study are:

1. What evidence suggests that a culture of assessment results in improved student learning outcomes in higher education?

2. What evidence suggests that ongoing faculty professional development emphasizing pedagogical change and assessment capacity-building results in improved student learning outcomes in higher education?

During the analysis of the fourteen identified studies, it became clear that learning improvement was conceptualized in context-specific ways dependent on the nature of the research, the institution, and the people involved. In some cases, evidence of improved student learning outcomes took the shape of student self-evaluation and perception of knowledge after targeted instruction emphasizing their understanding of a concept, including essential learning outcomes. Faculty perception of student learning improvement also served as a measure of change. In other cases, improved student learning was assumed based on changes in the educational environment, whether changing the classroom organization to be more flexible or implementing active learning strategies. Note that interventions, such as the introduction of active learning pedagogies, were typically integrated into courses as part of a structured faculty development program.

Three inter-related key themes emerged from the analysis: changes to learning conditions, changes through reciprocal capacity-building, and changes in faculty and student mindsets. Table 5 summarizes each key theme's constructed meaning, related thematic codes, and the studies related to that theme's synthesis.
Table 5. Summary of Key Themes and Related Codes.

Key Theme: Changes to learning conditions.
Constructed Meaning: Observations of learning improvement due to changes in learning conditions, including instructional practice, learning environment, and faculty perspectives.
Thematic Codes: Improvement via targeted instructional practice; improved faculty-student interactions.
Related Studies: Allen et al., 2019; Bickerstaff et al., 2021; Carter, 2013; Cydis et al., 2015; Demeter et al., 2019; Jankowski, 2020; Karabulut-Ilgu et al., 2021; Marrujo-Duck, 2017; Pelletreau et al., 2018; Tinnell et al., 2019; Wheat et al., 2018; Willett et al., 2014.

Key Theme: Changes through reciprocal capacity-building.
Constructed Meaning: Reciprocal learning and creation of meaning between and among faculty and students through collaborative learning improvements, authentic assessment, and changes in thinking and behavior.
Thematic Codes: Improvement via data use; students and faculty learning together.
Related Studies: Allen et al., 2019; Bickerstaff et al., 2021; Carter, 2013; Cydis et al., 2015; Demeter et al., 2019; Jankowski, 2020; Karabulut-Ilgu et al., 2021; Marrujo-Duck, 2017; Pelletreau et al., 2018; Tinnell et al., 2019; Willett et al., 2014.

Key Theme: Changes in faculty and student mindsets.
Constructed Meaning: Observed changes in faculty beliefs about teaching and learning; changes in student attitudes, self-perception of learning improvement, and ability to apply learning to other contexts.
Thematic Codes: Faculty development leading to changes in beliefs; student perception of learning improvement; application of new knowledge.
Related Studies: Allen et al., 2019; Bickerstaff et al., 2021; Cydis et al., 2015; Demeter et al., 2019; Jankowski, 2020; Karabulut-Ilgu et al., 2021; Marrujo-Duck, 2017; Pelletreau et al., 2018; Tinnell et al., 2019; Wheat et al., 2018; Willett et al., 2014.

Additionally, two practice- and policy-related ancillary themes were recognized during the analysis and coding process: characteristics of learning-oriented faculty development and accepting/rejecting audit culture. These will be described at the end of this chapter. The following sections are organized thematically, first by the key themes that connected directly to student learning and second by the ancillary themes that emerged through a broader deconstruction of what was said and not said in the studies. The intent is to offer an integrated interpretation that represents insights gained during critical analysis, deconstruction, and reconstruction through a synthesis of multiple studies' findings (Finfgeld, 2003; Noblit, 2017).

Key Theme 1: Changes to Learning Conditions

The theme of changes to learning conditions encompassed observations of learning improvement attributed to the use of targeted instructional practices and improved faculty-student interactions. These changes were typically a result of targeted faculty development programming.

Improvement Through Targeted Instructional Practice

Targeted changes in instructional practices contributed to shifts in the learning environment and overall learning conditions. Structured, cohort-based faculty development programming, usually in the form of faculty learning communities (FLCs), emphasized the integration of active learning pedagogies and methods of authentic assessment. Through these programs, faculty were better able to reflect on how the different interventions changed their perceptions of student learning.
Changes in student learning were observed in connection with changes to the learning environment. Although physical space was not the only context encompassing the learning environment, the integration of active learning pedagogies and authentic assessments logically led to alterations in the use of space as instruction shifted from instructor-centered to student-centered models. One example involved flexible seating arrangements, allowing students to move in and out of small groups. The flexible seating "made them feel relaxed and comfortable, kept them alert and attentive, and created a participatory classroom environment" (Wheat et al., 2018, p. 45). While not all the studies specifically examined the impact of the learning space on students, the use of group work, team building, whole group instruction, and independent learning reasonably implied space use beyond the traditional lecture-based seating plan. The participatory environments encouraged collaboration and self-directed learning, two outcomes associated with improved student learning.

Drawing conclusions about the impact of instructional practice changes and interventions on student learning is perhaps one of the more subjective goals in this study and in higher education. The findings in the analyzed studies that aligned with the concept of targeted instructional practice highlight the multiple lenses through which we can view student learning. In one study, participation and engagement were improved by instructional interventions, but gains in higher-order skills, such as applying new information to other contexts, did not occur. In another study, a positive impact of the new instructional practice was seen for students in a lower-level course but not in the next course in the sequence. However, in both examples, the authors acknowledged that these findings were affected by the tools used to assess student learning and the overall study design, including the time faculty and students were allotted to integrate and experience the interventions.

Conversely, other studies pointed to evidence of improved student learning as a direct result of targeted instructional practices. In these instances, improvements in students' thinking (e.g., metacognition) and understanding of course content were noted. When faculty intentionally interacted with students to facilitate the development of competencies related to essential learning outcomes or the revised unit or lesson, students were seen as more successful. A common thread among the findings associated with this theme was the use of intentionally revised and targeted active learning instructional practices coupled with positive faculty-student interactions, leading to perceived and documented improvements in student learning. As faculty developed increased competency in new active learning pedagogies or authentic assessment strategies, a movement toward student-centeredness was evident as faculty could "focus their attention on students and their processes for learning" (Bickerstaff et al., 2021, p. 23). Improvements to lessons and course materials also benefitted students as faculty transitioned away from high-stakes summative assessments toward more authentic assessments and collaborative and formative learning opportunities.
Improved Faculty-Student Interactions

In many cases, including situations where multiple assessment measures were incorporated, faculty perception of student learning improvement appeared to be gauged largely through observations of student behavior during class. These observations were frequently associated with formal and informal interactions between faculty and students, typically due to a shift toward student-centeredness. A paradigmatic change toward more student-centered instructional practices emerged from the targeted, collaborative faculty development programs identified in the studies. As faculty engaged in cycles of inquiry and action, adjusting their practice to address student needs, the learning environment evolved from content- or instructor-centered to student-centered. In doing so, students were able to become more participatory both in classroom activities and in shaping their learning experience:

Coupled with trauma-informed and healing-centered pedagogy and assessment, faculty and staff can partner with students as producers of content, experts of their lived experience, and be active partners in solving the problem of demonstrating complex learning during a pandemic. (Jankowski, 2020, p. 27)

While only one of the studies was recent enough to discuss learning in the context of the COVID-19 pandemic (Jankowski, 2020), views of student-centeredness and students as partners were not limited to that study. Consistently, instructional changes contributed to positive learning environments and faculty-student interactions. For example, adopting authentic assessments involved moving away from concerns about student cheating, leading faculty to express a more favorable and less distrustful perception of student work ethic.

The interactions between faculty and students were significant in many cases as the instructors intentionally worked to shift from lecture-based instruction to active learning pedagogies. Students reported feeling more engaged with the course content, noting that they were more interested and involved when they felt like participants, rather than recipients, in the learning experience. Similarly, faculty members "expressed how their implementation of collaborative student learning techniques within their classroom bolstered their interactions and connections with students" (Tinnell et al., 2019, p. 8). One study that used multiple methods to assess student learning associated with the intervention noted that student satisfaction with instructional practices was positively related to higher scores on assessments of essential learning outcomes like critical thinking. In a study involving multiple instructors and course sections, researchers noted that "regardless of a faculty member's unique instructional fingerprint, student learning improved from pre- to post-assessment in all classes" (Pelletreau et al., 2018, p. 5), suggesting that the intervention itself, rather than individual teaching style, accounted for the improvement.

As faculty created opportunities to connect with students during class, they were better able to get to know one another, increasing faculty opportunities to iterate instruction and content based on student needs. As students felt more comfortable and connected in the learning environment, they were better able to accept the challenges of new learning modes and dig deeper into the course material.
As a result, faculty could observe, informally and in some cases through direct assessment, evidence "to support the idea that effective teaching practices can foster the development of stronger critical thinking skills" (Demeter et al., 2019, p. 48), among other essential learning outcomes. The use of formative and real-time feedback during instruction also offered faculty and students opportunities to interact and create a more partnership-oriented, participatory learning environment.

Key Theme 2: Changes Through Reciprocal Capacity-Building

As faculty participated in professional development programs and applied new skills and ideas to their classes, they engaged in collaborative work with one another and with students. In many cases, faculty development experiences led to the creation of a learning culture where faculty and students learned and grew together, supporting the co-creation of meaning and understanding through collaborative learning improvements or integration of authentic assessment.

Improvement Through Data Use

When faculty were able to incorporate and view student learning assessment as an organic part of teaching, they could use assessment data to improve instruction and support student learning. The concept of 'closing the loop' applied to these cases when faculty iteratively adjusted instruction in response to student learning feedback, whether formal or informal. Formal feedback came from student assessment data on class projects and assignments, while informal feedback was received through small and large group work and individual faculty-student interactions.

Another outcome of teaching and learning improvement through data use involved faculty supporting colleagues to implement and revise assessment practices. In these experiences, "faculty benefited from a community in which they had opportunities to look at their own students' data, discuss aggregate student data from multiple classrooms, and respond to one another's suggestions" (Pelletreau et al., 2018, p. 9). The community-oriented professional development enabled faculty to learn from one another, growing and improving their professional practice.

Students and Faculty Learning Together

Many of the studies shared the unique characteristic of faculty and students learning concurrently and from one another. As several of the faculty development experiences were sustained throughout a term or academic year, faculty continued to learn new instructional strategies and implement them in their courses. Reciprocal learning occurred in professional development and student learning contexts as study participants were able to practice new skills, obtain feedback from students and peer observers, and iterate revisions to materials and strategies in the peer community.

Self-Reflection as a Metacognitive Practice. Self-reflection was a common theme, both for faculty development and classroom instruction. Reflection was used as a metacognitive practice to help students engage with and contextualize their learning experiences, developing an understanding of themselves and learning "more about the competence they have acquired" (Cydis et al., 2015, p. 47). Similarly, reflecting on their professional learning experiences and considering their classroom instructional experiences allowed faculty to continue developing and refining their professional practice. An added function of reflection in faculty development programs was to offer learning data to the facilitators.
Just as faculty could learn from their students' reflections and respond accordingly, faculty development facilitators could learn which aspects of the development program faculty were integrating and which were not working.

Integrating student voice and perspectives as an assessment tool was a logical outcome of incorporating reflection in classroom and professional development learning contexts. During instruction, faculty were able to offer immediate feedback, allowing students to engage with the materials and recognize where learning was occurring and where they needed to remediate their understanding of the material. At the same time, students were able to offer feedback, either through faculty-developed collection methods or through their responses to instruction and class activities. By attuning to student perspectives and feedback, faculty were better able to respond to student needs and make timely adjustments to instruction.

Student Success Promoting Sustained Practice. Student success was identified as a motivator promoting sustained improvement practices and continued student engagement. When faculty observed improvements in student learning due to pedagogical change, they recognized the benefits of continuing those strategies and integrating them into their regular teaching practice. Similarly, when students experienced success in learning experiences, they were motivated to remain engaged and persist in the course.

For example, faculty and students learned from each other when examining essential learning outcomes and, in some cases, co-creating assessment measures and rubrics. Applying revised assessment tools to the students' work gave faculty insight into how students conceptualized the learning objectives and course material, creating further instruction and support opportunities. At the same time, students began to understand why their learning experiences mattered and how that learning could be applied to other contexts. Engaging students as learning partners benefitted both faculty and students. Further, students in courses outside the scope of the faculty development programs also benefitted, as instructors applied their new knowledge to other classes in their workload.

Faculty Learning from Each Other. A benefit of faculty development programs noted in several studies was the opportunity to learn from colleagues. Collaborative learning occurred during the scheduled faculty development meetings and when faculty engaged in peer observation and offered each other feedback. Peer observation was a commonly identified beneficial practice, both for the instructor being observed and for the observer. As faculty engaged in conversation, course material redesign, peer observation, and other peer learning experiences, they felt that "their FLC experience shifted their understanding of teaching from an individual effort to an appreciation of collaboration among peers" (Tinnell et al., 2019, p. 9). An additional benefit of the faculty development programs explored in the studies was the opportunity to connect with colleagues outside one's discipline. In some cases, the learning community model was seen as helping faculty break through the disciplinary silos often seen in higher education.
Significantly, some faculty development programs included structured opportunities for faculty to apply their experiences to the scholarly study of student learning, incorporating principles of the Scholarship of Teaching and Learning (SoTL) and again contributing to reciprocal capacity-building by sharing new knowledge with others.

Key Theme 3: Changes in Faculty and Student Mindsets

Within this theme, changes in student mindset most clearly connected to student voice and experience and were perhaps most indicative of observable impacts on students. Shifts in student mindset and self-efficacy were observed when students were introduced to new knowledge concepts or instructional strategies, particularly those that incorporated active learning principles. Improvements in student attitudes, articulated self-perception of learning improvement, and the ability to apply learning to other contexts were described in the analyzed studies.

Faculty Development Leading to Changes in Beliefs

A shift in faculty members' beliefs about teaching and learning was a common theme in the analyzed studies. In most studies that focused on targeted faculty development programs, participants "became more aware of students' learning needs and how to affect active and positive learning experiences among their students" (Wheat et al., 2018, p. 44). Faculty participants identified their professional learning experiences as influential in moving toward a more student-centered instructional model. This was evidenced in the faculty's integration of active learning pedagogies into their teaching practice and through changes to instructional materials. Through the collaborative learning experiences in the faculty development programs, faculty were able to interrogate their assumptions about teaching and learning and develop new perspectives as they saw the positive outcomes their new instructional strategies created for students.

The change in beliefs and evolved practice was also evident in the broader, sustained application of the faculty development program goals. In most faculty development studies, faculty continued to use the strategies and tools gained from the programs in their courses, even after the professional development had concluded.

Student Perception of Learning Improvement

As a result of interventions implemented throughout the analyzed studies, students' attitudes toward course content and learning in general appeared to improve as they developed a stronger connection to the instructor, peers, and subject matter. When provided with structured opportunities to self-evaluate and reflect on their learning, students who may have initially been reluctant to participate in class activities later identified active learning pedagogies as improving their learning experience and involvement with the course material. Faculty and students identified improved student attitudes as contributing to better engagement, deeper understanding, and willingness to work collaboratively with peers. Mindsets changed as students came to perceive learning as more than listening to a lecture or taking tests. As engagement and participation increased, students were more open to considering their education through a wider lens.
For example, when essential learning outcomes were intentionally integrated into courses and explicitly emphasized to students through instruction and activities, students' mindsets toward learning shifted from a narrow view of subject or content-area knowledge to a broader conceptualization of application to other contexts. In a reflection, one student commented, "I have learned how to think and create beyond the scope of the class to achieve an open-ended goal" (Cydis et al., 2015, p. 41). Instructors' efforts to support student motivation and participation were seen as influential to students' persistence and general success in the learning experience. Interestingly, in some cases, students' perception of their ability levels was lower than what assessment data indicated.

Application of New Knowledge

Application of authentic assessment practices guided students to demonstrate their learning in unique and perhaps less traditional ways. One component of authentic assessment, asking students to reflect on their learning, served as both an artifact and a learning experience. The reflections were used to document student competence as a result of learning experiences but also functioned as a mechanism to support the development of metacognitive abilities. As faculty perspectives about learning and assessment shifted with their professional development experiences, students had opportunities to demonstrate their understanding in personally relevant ways: "honing in on essential learning outcomes of a course meant that faculty better understood the role of assessment in course design and were more open to allowing students to choose the format of the assignment they wanted to submit" (Jankowski, 2020, p. 17). When students expressed their learning authentically through 'real-world' activities, conditions for more meaningful learning experiences were created.

Again, students gained awareness of their understanding when structured reflection opportunities were provided, in addition to receiving feedback through other formative and summative assessment methods. Students recognized their growth as learners in the context of the learning experience. They identified broader applications, such as how an essential learning outcome like critical thinking might be applied outside the present educational context.

Ancillary Themes

In addition to the key themes that emerged from the studies' analysis, ancillary themes were identified. While not intended to serve as evidence of improved student learning outcomes, these concepts provide additional context to the key themes and point to potential policy and practice improvement opportunities for higher education institutions. Table 6 summarizes each ancillary theme's constructed meaning and related thematic codes, as well as the studies related to each theme's synthesis.

Table 6. Summary of Ancillary Themes.

Ancillary Theme: Characteristics of learning-oriented faculty development
Constructed Meaning: Faculty development that models and supports development of a learning-oriented pedagogical paradigm.
Thematic Codes: Responsiveness to faculty readiness; collaborative, structured, and focused professional development.
Related Studies: Allen et al., 2019; Bickerstaff et al., 2021; Cydis et al., 2015; Jankowski, 2020; Karabulut-Ilgu et al., 2021; Marrujo-Duck, 2017; Pelletreau et al., 2018; Tinnell et al., 2019; Wheat et al., 2018.

Ancillary Theme: Accepting or rejecting audit culture
Constructed Meaning: Institutions situating assessment from a compliance orientation or supporting faculty in authentic assessment of student learning.
Thematic Codes: Characteristics of audit culture; establishing a learning-oriented research agenda.
Related Studies: Allen et al., 2019; Baas et al., 2016; Bickerstaff et al., 2021; Carter, 2013; Culver & Phipps, 2018; Demeter et al., 2019; Jankowski, 2020; Karabulut-Ilgu et al., 2021; Marrujo-Duck, 2017; Pelletreau et al., 2018; Tinnell et al., 2019; Wheat et al., 2018; Willett et al., 2014.

Ancillary Theme 1: Characteristics of Learning-Oriented Faculty Development

The faculty development models described in several studies included learner-oriented components similar to the practices noted in the targeted classroom interventions for active learning. In addition to learning new strategies and information from the facilitators, participants engaged in group discussions, peer support, and metacognitive reflection. Additionally, faculty co-created meaning in their learning experiences by offering feedback to facilitators and helping to guide the next steps in the learning process. Using faculty voice to drive learning, and as a form of assessment, created a sense of ownership over faculty members' learning and modeled what a learning-oriented environment might look like.

Responsiveness to Faculty Readiness

Critically, in several of the studies, faculty identified their need for additional or continued professional development to build capacity in assessment or specific instructional strategies. A disconnect between professional experience and assessment skills was noted in multiple instances. Faculty teaching in higher education are experts in their disciplinary fields but may lack training in effective assessment methods or strategies to apply assessment data to learning improvements. In some cases, faculty did not have the knowledge or capacity to engage in specific practices, such as analyzing data, applying data to practice, or engaging with research literature to integrate evidence-based practices in their teaching: "Participants shared years of teaching experience and were engaged in SLO assessment that led to improvements in teaching methods, yet they revealed a lack of fluency in how to improve learning and close achievement gaps" (Marrujo-Duck, 2017, p. 30). Relying on intuition and experience instead of evidence-based practices to drive instructional change decisions was viewed as ineffective.

To mitigate barriers to participation and implementation of sustained pedagogical changes, professional development facilitators worked to help faculty find a balance between authenticity of professional practice and implementation fidelity of evidence-based strategies. This included collaborative discussions regarding the application of student learning data generated during the targeted instructional interventions.

Collaborative, Structured, and Focused Professional Development

Common characteristics of faculty development programs demonstrating alignment with faculty readiness concerns were those that incorporated collaboration, structure, and focused improvement goals.
Providing opportunities for faculty collaboration was a critical factor in the successful faculty development programs described by most of the studies. Faculty valued professional opportunities to collaborate with colleagues during and outside regular meetings, engaging in peer support, peer observation, and mentorship. These collaborative opportunities led to cross-institutional connections, breaking down departmental silos and creating support structures beyond colleagues in the same discipline. Additionally, when faculty were encouraged to contribute to the content of the professional development programs, whether through discussion, sharing successful experiences, or co-developing instructional content, ownership of the experience and reciprocal teaching and learning among peers added relevance and value to the program.

Structured faculty development models were another common characteristic identified in the analyzed studies. Following a cohort or learning community model created opportunities for collaborative, hands-on, guided experiences that could be directly applied to instructional practice. The learning community models also tended to involve sustained, ongoing meetings or opportunities for connection, which were identified as supportive of improving faculty confidence, competence, and ability to integrate new knowledge with instructional practice. Describing the value of the learning community, one faculty member said, "getting to talk and ask questions, hearing that a lot of other people had the same kinds of issues and questions, and then hearing what other faculty were going to do was really good" (Tinnell et al., 2019, p. 11).

These structured professional development models also emphasized specific, manageable improvements. Regardless of the program's emphasis, improvements were typically tied to student learning assessment in some form. Examples included connecting essential learning outcomes to course design, integrating student-oriented active learning pedagogies, creating opportunities for providing real-time formative feedback to students, and shifting from traditional evaluation tools to authentic assessments.

The faculty development programs described in the studies were seen as supportive, low-risk opportunities to develop capacity for instructional changes and to try out new strategies. These supportive environments facilitated evolving beliefs about learning and instruction, with the goals of the professional development programs reflected in participants' course materials and instructional practices.

Ancillary Theme 2: Accepting or Rejecting Audit Culture

Audit Culture Characteristics

An underlying theme throughout the analyzed studies centered on the dichotomy of the assessment movement, namely the competing interests of assessing student learning for compliance or for learning improvement. When institutions emphasized the idea of building a culture of assessment without making student learning central to that culture, learning-oriented research questions may not have been part of the research agenda.

Compliance Orientation. Conceptualizing audit culture in student learning assessment involves acknowledging the compliance-driven orientation adopted by some institutions and faculty members. Failure to adequately articulate the utility of institutional and programmatic evaluation for faculty, students, and broader institutional contexts contributed to the audit culture's compliance-driven mentality.
The value and purpose faculty attributed to assessment reporting processes were, in turn, shaped by their experiences of participation. In some instances, faculty perception of assessment's value was colored by previous experiences of assessment failing to yield valuable data. Similarly, some faculty did not perceive that programmatic or institutional assessment reporting processes would lead to meaningful changes in student learning and thus participated minimally. Another perception identified was that participating in assessment processes held no benefit for faculty. Rejection of assessment practices coincided with the belief that learning is complex and impossible to reduce to simple data points. With accountability as the primary concern, the design of the assessment process or tool may have been flawed, or the expected outcomes may have been inconsistent with student ability or skill level. As a result, faculty saw no practical application of the assessment results, and students derived little benefit from the assessment of their learning.

Nonintegrated Assessment. Organizational structure negatively impacted assessment value when the programmatic or institutional assessment process was not an integrated part of the institution's priorities. Assessment may have been seen as necessary but not contributive to significant institutional or systematic changes. Perceptions that assessment was 'extra' work without a clear place in the institution's functions demonstrated confusion about whether assessment's purpose was accountability or improvement. Concerns about losing autonomy to perceived standardization expectations may have also prevented some faculty from wanting to build capacity and knowledge of effective assessment practices.

Misalignment of Institutional Resources. Failure to allocate adequate resources to support and sustain change implementation was also identified as a gap. A department chair "pointed out that there are minimal resources at the department level, so decisions about resource allocation were not part of faculty mindsets" (Culver & Phipps, 2018, p. 8), directly calling into question this commonly suggested motivator for faculty to participate in program assessment. When professional development resources were misaligned with faculty needs, such as being allocated primarily to conference attendance or one-off workshop training, sustained local opportunities like faculty learning communities were not prioritized. Faculty who participated in learning communities and were encouraged to share knowledge and practices institutionally found the lack of a faculty forum or symposium for sharing scholarship related to teaching and learning a barrier to full engagement in improvement initiatives. Even successful and sustainable faculty development programs were at risk due to insufficient time to plan, practice, and build skills, both for faculty and students, as exemplified by this finding: "the ALC experience significantly improves students' learning engagement in terms of Participation, but it has no effects on students' learning engagement as measured by Meaningful Processing, which may be related to insufficient opportunity to explore and apply in class" (Wheat et al., 2018, p. 46).

Rejecting Audit Culture

A learning-oriented research agenda was evident when faculty development programs were connected to student learning, rejecting the audit culture mentality.
Regardless of curriculum designs and assessment practices across institutions, assessment practices were regarded as more beneficial when faculty collaboration and conversation guided the process. Additionally, the integration of multiple assessment measures to determine learning gains resulting from interventions benefitted students by providing relevant learning data to faculty.

Valuing Teaching and Learning. Institutions that rejected audit culture supported instructional innovation and authentic assessment practices. Institutions demonstrated that they valued teaching and learning by recognizing change efforts and supporting faculty who took on leadership roles in mentoring colleagues. Through faculty development programs, learning-oriented institutions also created opportunities for faculty collaboration and discussion of assessment results and methods to better understand student learning.

Contextualizing Inquiry. Determining when to assess student learning and how to do so appropriately was identified as a challenge. Studies that tied student learning to faculty development programs and the implementation of instructional interventions indicated that there was value in recognizing the contextual nature of learning and assessment, as "locally developed instruments aligned with faculty development goals proved to be more suitable for assessment of learning gains on a specific campus than a well-respected rubric used without local adaptation" (Willett et al., 2014, p. 21). This observation supported other concerns about assessment instruments, particularly when assessment tools developed separately from the learning activity did not match the activity's objectives. On a related note, a lack of common assessment tools made consequential assessment of essential learning outcomes difficult.

As part of a contextualized inquiry model, some studies reflected on the benefits of using multiple assessment measures. One theme that addressed compliance issues involved employing different assessment tools to satisfy various stakeholders. Some studies indicated that faculty were uncomfortable with the prospect of being held accountable for student learning when students' skills impeded their ability to meet the learning objectives in the course: "SLO assessment did identify gaps in student knowledge, including a lack of preparedness for college-level coursework that could not be overcome in one semester" (Marrujo-Duck, 2017, p. 28). Using multiple assessment measures was thought to improve assessment fidelity. Standardized assessments could satisfy external stakeholders, while internally developed assessment tools could offer faculty, staff, and students worthwhile information.

Finally, a critical component of a functional assessment system involved including student voice as a measure of learning. Some studies showed that identifying students' perceptions of their knowledge and of what makes a difference in their education was a significant component of overall understanding of student learning improvement. Ultimately, "although using multiple measures brings a complexity to assessment work, it also creates the opportunity to create more sophisticated narratives of student achievement" (Demeter et al., 2019, p. 49).

Faculty Development Leading to Organizational Changes. In some cases, faculty development programming led to organizational culture changes, creating a learning orientation.
Where sustained application of strategies learned in faculty development was identified, continued practice of effective pedagogies appeared to integrate those strategies into faculty members' regular teaching, making student-centered instruction less time- and labor-intensive.

Chapter Four Summary

This chapter presented findings synthesized from the 14 studies identified during the search and selection process explained in chapter three. Results directly related to the studies' impact on student learning were synthesized and presented as three key themes: changes to learning conditions, changes through reciprocal capacity-building, and changes in faculty and student mindsets. Two ancillary themes related to the context of the key themes were also shared. Although the ancillary themes did not link directly to the synthesized studies' results, these concepts offered insight into the key themes. The implications of these findings will be explored further in chapter five.

CHAPTER FIVE: DISCUSSION

Introduction

This qualitative meta-synthesis aimed to gain an informed perspective on the current state of empirical literature related to the construct of cultures of assessment. Specifically, the intent was to identify and analyze existing research to understand whether and how cultures of assessment impact student learning outcomes in higher education. Chapter one provided the context for the study and summarized the problem and purpose leading to this topic of inquiry. Guided by two closely related research questions, the emergent nature of the study was grounded in the theoretical lens of Sense-Making Theory (Dervin, 1999). Chapter two summarized literature relevant to this study, primarily to explore foundational concepts related to the constructs of cultures of assessment, student learning outcomes assessment, and faculty development's place in institutional effectiveness paradigms. Chapter three explained the study's design, including researcher positionality, data collection, and analysis and coding. Ethical considerations and methods of achieving authenticity, trustworthiness, and credibility were also addressed. Chapter four presented a meta-synthesis of the studies selected for inclusion. Three key themes emerged from the analysis of twelve of the fourteen selected studies. Furthermore, two ancillary themes emerged from the analysis of all fourteen studies. These ancillary themes provided additional insight into the key themes but were not specifically relevant to the study's two research questions.

In this chapter, I discuss conclusions related to the study's research questions and the study's limitations. Recommendations for future research and implications for practice will also be addressed.

Research Questions

The following research questions guided the study:
1. What evidence suggests that a culture of assessment results in improved student learning outcomes in higher education?
2. What evidence suggests that faculty professional development emphasizing pedagogical change and assessment strategies results in improved student learning outcomes in higher education?

Overview of the Study

As discussed previously, this study was primarily concerned with identifying and analyzing qualitative and mixed-methods empirical research focused on articulating the impact of cultures of assessment on improved student learning outcomes.
After generating keyword search terms using the SPIDER search strategy tool (Cooke et al., 2012; Methley et al., 2014), I developed a research design sequence modified from Booth's (2017) recommended process. Through a multi-step search process, I conducted exploratory and scoping searches of the Montana State University Library's catalog, six online databases, and Google Scholar, using a modified scoping search protocol based on the process articulated by Bramer et al. (2018). The search process yielded 4,876 unduplicated citations. The citations were then evaluated for relevance to the research questions using inclusion criteria based on Butler et al.'s (2016) guidance for articulating what constitutes 'data' for analysis. After applying the selection criteria, fourteen studies formed the final data set.

After identifying the final set of studies for analysis, I managed the coding and analysis process in NVivo, a data analysis program designed to organize and analyze qualitative data. More than ninety initial codes were generated through open coding. Through four rounds of axial coding and ongoing analytic memoing, the data were deconstructed and reconstructed, collapsing the initial codes into six descriptive categories that led to the final thematic framework of three key themes and two ancillary themes. The themes were:

Key Theme 1: Changes to learning conditions
Key Theme 2: Changes through reciprocal capacity-building
Key Theme 3: Changes in faculty and student mindsets
Ancillary Theme 1: Characteristics of learning-oriented faculty development
Ancillary Theme 2: Accepting/rejecting audit culture

In chapter four, I presented a meta-synthesis of the themes that emerged from the analysis process. To ensure fidelity of methodology, the meta-synthesis offers an integrated interpretation that represents insights gained during critical analysis, deconstruction, and reconstruction through a synthesis of multiple studies' findings (Finfgeld, 2003; Noblit, 2017). Therefore, the findings were presented thematically and holistically.

Conclusions and Related Literature

While important themes emerged from the analysis, particularly concerning the value of collaborative, cohort-based professional development incorporating elements of faculty learning communities (FLCs), no conclusive information was identified in response to the research questions.

Conclusions Related to RQ1

What evidence suggests that a culture of assessment results in improved student learning outcomes in higher education? As I attempted to use Sense-Making Theory to "grapple with the meaning of the evidence and its implications for action" (Honig & Coburn, 2008, p. 592, as cited in Jonson et al., 2017, p. 39), I concluded that the empirical evidence connecting improved student learning outcomes to cultures of assessment is scant. Just fourteen of the 4,876 studies (<1%) met the criteria for inclusion, and most of those studies were related to faculty development programs. Consistent with this finding, much of the literature on the assessment movement in higher education in the United States centers on the belief that establishing a culture of assessment is an essential foundation for determining the extent of student learning (Fuller, 2011; Kinzie & Jankowski, 2015; Suskie, 2018). Literature describing the components of a culture of assessment and how to achieve faculty 'buy-in' is more common than empirical evidence supporting these assertions. Fuller et al.
(2016) provide a succinct summary of this critique related to Weiner's (2009) traits of a culture of assessment:

These definitions have not taken into account that although such traits might be in place at an institution, practices in a culture of assessment might augment the reasons these traits are present. Stated differently, one institution can be engaging in all 15 of the traits outlined by Weiner (2009) with the primary intent of improving student learning, while another institution also could employ these same traits but might be doing so primarily to satisfy accreditation mandates. Although both institutions would meet Weiner's definition of a culture of assessment, the influences on student learning improvement processes could be drastically different. (p. 398)

Certainly, precision in terminology is necessary, but perhaps the assessment community is asking the wrong questions. Those publishing about assessment rarely link a specific intervention, instructional practice, or faculty development experience with student learning. Fuller et al. (2016) criticized the extant academic literature as consisting of "conjectural accounts of hypothetical practices for developing a culture of assessment rather than studies that identify or confirm factors that represent cultures of assessment" (p. 396). Accordingly, the results of this study suggest a broader critique: rather than seeking to identify factors that represent cultures of assessment, perhaps more emphasis should be placed on advocating for a learning orientation in higher education and supporting institutions and faculty in establishing that goal.

Conclusions Related to RQ2

What evidence suggests that faculty professional development emphasizing pedagogical change and assessment strategies results in improved student learning outcomes in higher education? A frequent criticism of the assessment movement is the lack of empirical data indicating improved student learning (Eubanks & Fulcher, 2021; Fuller et al., 2016; Kezar, 2013; Welsh & Metcalf, 2003). The results of this study appear to confirm this critique. While improvements to student learning were addressed in some studies, primarily as a component of a structured faculty development experience or a targeted instructional intervention, the evidence supporting these improvements was inconsistent. This is not to say that faculty and student perceptions of learning improvement are irrelevant, nor that the use of reflection as a metacognitive practice is invalid. However, few studies included information about student learning pre- and post-intervention, so the assertion that learning was 'improved' lacks evidence.

In some cases, the researchers noted a disconnect between instructional improvement and perceived student learning improvement; for example, faculty perceived improvement in their teaching but did not relate those improvements to changes in student learning. In other cases, it was assumed that exposure to instructional improvement, curricular changes, or other related interventions led to improved learning, but evidence of change or progress was not shared. As noted in some of the studies, institutions continue to perpetuate ineffective assessment practices, such as reducing student learning to quantitative data points without supplying broader context. Despite available evidence-based research emphasizing authentic assessment of student learning, these ineffective practices likely stem from systemic issues at the institutional level.
For example, the erroneous assumption that participating in assessment training (e.g., how to conduct institutional reporting) increases faculty buy-in, thereby creating positive perceptions of assessment (Culver & Phipps, 2018), should be re-examined. If faculty are wary of what they are being asked to 'buy into,' the apprehension and skepticism underlying the resistance to the assessment movement must be addressed first (Bowker, 2016; Suskie, 2018). It is critical to offer faculty the assurance that studying the impact of pedagogical innovation on student learning will not result in punitive measures.

Further, this study's findings support the assertion that faculty may lack the skill set necessary to collect, analyze, and apply student learning assessment data (Banta & Palomba, 2015; Bresciani Ludvik, 2019; Coates, 2015; Ewell, 2009; Hutchings et al., 2015). Several analyzed studies included faculty comments about needing additional training and support to engage in effective teaching and learning practices. That desire for support is consistent with the literature describing college and university faculty as possessing disciplinary expertise but lacking formal training in curriculum and instruction (Banta & Palomba, 2015; Boyer, 1990; Burns, 2017; Burnstad & Hoss, 2010; Fink, 2013; Smith, 2001). This study's results imply that investing more institutional resources, particularly time, into ongoing, collaborative faculty learning community models could ameliorate assessment concerns at the course level and potentially at the program level. These findings are supported by literature advocating professional development that builds instructor capacity to employ student-centered instructional strategies and situates teaching in an inquiry-based perspective (Dickson & Treml, 2013; Fink, 2013; Grannan & Calkins, 2018; Hutchings et al., 2013; Wehlburg, 2010).

General Conclusions

While higher education is not positioned to move beyond or subvert accountability mandates (Bresciani Ludvik, 2019), leaders can guide efforts to create conditions that help faculty integrate authentic and meaningful assessment practices into their work. It is critical to provide the time and resources to help faculty build the capacity to assess student learning meaningfully. This includes ensuring that faculty development incorporates components of learning community models and is scaffolded for faculty with varied instructional experience and career levels. When higher education leaders, including assessment coordinators, do not have experience in teaching or curriculum development, it can be challenging for them to accurately and adequately support faculty who need assistance in these areas. Recognizing the strengths and weaknesses of institutional leaders and providing access to resources to remediate gaps in knowledge of teaching and learning (Bok, 2013; Bresciani Ludvik, 2019) is another critical step toward establishing a learning-oriented culture.

The analysis also points to a necessary paradigm shift when considering avenues to establish an institutional culture that supports the assessment of student learning. Rather than attempting to define cultures of assessment, perhaps higher education needs to focus on identifying a shared understanding of what 'learning' or 'learning improvement' means, particularly concerning student learning outcomes assessment. It is possible that different definitions can exist for multiple stakeholder purposes (Shavelson, 2007).
It may be more beneficial to determine what 'learning' means for institutional effectiveness and accreditation purposes versus what 'learning' looks like from a course-level or programmatic standpoint. In my experience as an instructor and assessment professional, one of the many barriers to assessing student learning is the absence of clear parameters delineating the metrics and criteria expected by regional accreditors. While flexibility in accreditation processes is valuable, a lack of clear guidance seems to lead to ineffective assessment practices or compliance-driven models that emphasize effective reporting more than effective teaching and learning (Eubanks & Fulcher, 2021; Provezis, 2010). As Fuller et al. (2016) articulated, how institutions reach compliance with assessment appears to vary widely, and there is no guarantee that typical assessment practices focus on a shared definition of student learning.

The selection process for this study excluded a significant amount of literature because there was no indication that the impact on student learning was addressed in the findings. Unless higher education and its stakeholders change their understanding of what constitutes student learning and normalize authentic, honest appraisal of student learning assessment, the literature will continue to revolve around defining cultures of assessment and attempting to understand faculty perceptions of assessment. While studies outside the dataset may make the connections this study sought, the studies selected for the analysis and the literature discussed in chapter two indicate that empirical research centered on student learning outcomes improvement remains a significant gap.

New Conceptual Framework

Using Sense-Making Theory to "grapple with the meaning of the evidence and its implications for action" (Honig & Coburn, 2008, p. 592, as cited in Jonson et al., 2017, p. 39), the findings that emerged from the data analysis point to the need for a paradigm shift from promoting a culture of assessment to emphasizing a culture of learning. While the original conceptual framework presented in chapter one reflected a desire to understand cultures of assessment, figure 6 represents a new conceptual model proposing conditions leading to a culture of learning.

Figure 6. Conceptual Model of a Culture of Learning.

In this new model, a culture of learning emphasizes four fundamental principles that exemplify the reciprocal capacity-building identified in the findings. First, the model calls for creating conditions for faculty and students to participate in meaningful learning experiences (Angelo & Cross, 1993; Fink, 2013), including building capacity through exposure to new ideas and active learning experiences (Grannan & Calkins, 2018). Emphasizing student-centeredness can lead to supportive and authentic teaching and learning environments. In these environments, individual inquiry can develop and facilitate student learning and a scholarly approach to teaching and learning that can help create the changes necessary for improved learning outcomes (Dickson & Treml, 2013; Fink, 2013; Hutchings et al., 2013; Wehlburg, 2010).

Prioritizing active learning principles in faculty development programs and in the classroom is another critical component of this framework. As noted previously, faculty who experienced social and active learning through FLCs appeared to be well-equipped to use active learning pedagogies in the classroom (Ouellett, 2010).
Integrating metacognitive practices like learning reflections can help faculty engage with professional development content and support implementing these practices in the classroom (Dickson & Treml, 2013; Fink, 2013; Hutchings et al., 2013; Wehlburg, 2010). Further, self-reflection may lead to a sense of ownership over one's learning, increasing engagement. When faculty felt ownership in the FLCs, they sustained their new practices after the training had ended. Similarly, integrating student voice as a measure of student learning assessment and embracing students as partners can help faculty and students learn from one another and give students a sense of ownership.

None of this is possible without a foundation of institutional support. One of the most critical resources is a commitment to providing time for faculty to engage in necessary professional development and time to plan and implement pedagogical innovations (Banta & Blaich, 2011; Grannan & Calkins, 2018; Walvoord, 2010). Emphasizing professional learning through social interactions would enable faculty to learn from one another and create support systems. Professional development extends to administrators as well. Campus leaders need the skills to support teaching and learning and may have skill gaps similar to those expressed by faculty (Bok, 2013; Bresciani Ludvik, 2019). Prioritizing ongoing capacity-building in teaching and learning for staff at all levels would demonstrate an institutional commitment to learning improvement.

Limitations

Beyond the limitations identified in chapter one, Prince (2004) outlined several limitations that come from asking the question "What works?" in relation to active learning pedagogies, limitations that also align with the research questions this study sought to answer. Specifically, data related to the assessment of multiple learning outcomes can provide inconclusive or contradictory results. Further, many of the higher-order essential learning outcomes we assess do not lend themselves to measurement, although organizations have attempted to quantify complex student learning. Because of this, finding relevant empirical studies connecting assessment of essential learning outcomes to improved student learning is challenging. In addition to the difficulty of identifying practice-based research tying assessment to student learning improvement, there is the challenge of interpreting the limited results that are available. This limitation is particularly noticeable because conducting quasi-experimental research on populations of students is problematic on multiple levels, one of which is tracking students through their educational careers.

Another limitation is that, with one exception (Jankowski, 2020), the analyzed studies were conducted in face-to-face, in-person course environments. Aside from the voices of faculty and students who were forced to adopt distance learning in the spring of 2020 (Jankowski, 2020), the needs and experiences of those teaching and learning online were not addressed in the synthesized studies and are therefore not part of this study's findings. As online programs proliferate, it will be important to conduct assessment research that includes this learning modality.

A third limitation is related to the search process. Cooke et al. (2012) described the challenge of accurately identifying relevant qualitative research. The search and selection process for this study relied heavily on abstracts and, to some extent, the subject terms assigned to the literature.
Because "effective retrieval terms rely on clarity in the title and abstract, although assignment of indexing terms depends on the indexer's interpretation of the full article" (Cooke et al., 2012, p. 1436), the success of a search depends on the indexed metadata. Idiosyncrasies in database indexing and inconsistencies in metadata can exclude relevant information from searches. Therefore, it is possible that suitable literature was excluded from this study.

Recommendations for Future Research

A logical next step in future research is to examine the concepts and perspectives contained in the many studies excluded from this study's analysis. More than half of the excluded articles were conducted outside the United States, so a comprehensive or comparative analysis of student learning assessment research from international perspectives could provide essential points of comparison to this study's findings. This research should consider differences in educational systems and could specifically include countries that participated in the Bologna Process (European Higher Education Association, n.d.). The Bologna Process was established in the European Union to support transfer and commonality in standards and quality assurance across Europe, so countries participating in this initiative would likely have similarities to higher education in the United States.

A significant number of excluded articles focused on PreK-12 teachers and student populations. While there are certainly differences between higher education and PreK-12 education, higher education might learn from faculty development models present in PreK-12 schools, particularly the contracted time allotted for professional development throughout the school year.

Several articles written by librarians were identified and excluded during the search process. Exploring the role academic libraries play in supporting institutional effectiveness may be instructive, as libraries tend to practice data-driven decision-making.

All but one of the studies that made up the dataset took place in traditional, face-to-face settings. Online faculty and student experiences were not well-represented, so future research should consider the online experience. This might include observations of faculty development integration with online courses and pre- and post-intervention student learning results.

Finally, examining research questions similar to this study's with the intent of disaggregating data by institution type may reveal a deeper level of understanding. For example, faculty development and institutional effectiveness practices at small community colleges likely differ from those at large research institutions in scope and practice. Identifying what works for different institution types could offer guidance for peer institutions, as well as demonstrate areas of commonality across higher education.

Implications for Practice

Although the findings did not supply clear-cut evidence of student learning improvement related to cultures of assessment, there are meaningful implications for practice. The first recommendation is to normalize action research and faculty engagement in the Scholarship of Teaching and Learning (SoTL). Institutional support for SoTL could be a step toward addressing the discomfort faculty may experience around studying the impacts of their instructional practices on student learning. The Scholarship of Teaching and Learning should be given the same credence as disciplinary scholarship.
Institutional leaders should establish support structures to help faculty participate in this equally important research, along with forums to share their findings.

The second recommendation is to evolve assessment research paradigms. Those who study assessment and institutional effectiveness can shift the prevailing narratives by encouraging empirical research on the effects of assessment practice on student learning. While this might prove difficult given the current narratives regarding creating cultures of assessment, establishing faculty 'buy-in,' and understanding faculty perspectives regarding assessment, normalizing participation in SoTL and publication of those studies' results could be a step toward better information.

Significantly, incorporating student voice and student perceptions into all levels of assessment may help higher education transcend audit culture. As Jankowski (2020) emphasized, the COVID-19 pandemic in the spring of 2020 revealed many inadequacies in serving students, including poor planning and ineffective instructional practices that might have otherwise been ignored. A perspective expressed in Jankowski's (2020) study sums up the potential for this paradigmatic shift:

I am not sure we could have shifted to it without the COVID crisis, but we have the opportunity now to have a unified, consistent, evidence-based "student first" culture that strongly encourages faculty to use more effective teaching and really know who our students are and what they need. (Jankowski, 2020, p. 18)

Higher education has an opportunity to make students partners in, instead of consumers of, their learning experiences. Recognizing and remediating problematic practices by learning who our students are and what they need is a move toward a culture of learning.

Concluding Thoughts

The selection process for this study excluded a significant amount of literature because there was no indication that the impact on student learning was addressed in the findings. While studies outside the dataset may make this connection, the overall results of this study indicate that empirical research centered on learning improvement remains a significant gap in the literature. Higher education stakeholders need to critically examine the accountability and institutional effectiveness systems that have been created over the last 30 years. As Kezar (2013) expressed, "We need studies that penetrate the veneer, allow understanding to surface, and unearth key insights" (p. 202). Unless authentic, honest appraisal of student learning assessment is normalized, the drive to assess learning outcomes will fail to supply relevant data, and the literature will continue to seek answers to the wrong questions, revolving around defining assessment cultures and attempting to understand faculty perceptions to increase buy-in. We owe it to our students to do better.

REFERENCES CITED

Alexander, P. A., Schallert, D. L., & Reynolds, R. E. (2009). What is learning anyway? A topographical perspective considered. Educational Psychologist, 44(3), 176–192. https://doi.org/10.1080/00461520903029006

Allen, T., Queen, S., Gallardo-Williams, M., Parks, L., Auten, A., & Carson, S. (2019). Building a culture of critical and creative thinking: Creating and sustaining higher-order thinking as part of a quality enhancement plan. In J. Domenech, P. Merello, E. DeLaPoza, D. Blazquez, & R. PenaOrtiz (Eds.), 5th International Conference on Higher Education Advances (HEAd'19) (pp. 1391–1398).
Andrade, M. S. (2011). Managing change—engaging faculty in assessment opportunities. Innovative Higher Education, 36(4), 217–233. https://doi.org/10.1007/s10755-010-9169-1

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). Jossey-Bass.

Ausubel, D. P. (2000). The acquisition and retention of knowledge: A cognitive view. Springer. https://doi.org/10.1007/978-94-015-9454-7

Baas, L., Rhoads, J. C., & Thomas, D. B. (2016). Are quests for a “culture of assessment” mired in a “culture war” over assessment? A Q-methodological inquiry. SAGE Open, 6(1). https://doi.org/10.1177/2158244015623591

Banta, T. W., & Blaich, C. (2011). Closing the assessment loop. Change: The Magazine of Higher Learning, 43(1), 22–27. https://doi.org/10.1080/00091383.2011.538642

Banta, T. W., & Palomba, C. A. (2015). Assessment essentials: Planning, implementing, and improving assessment in higher education (2nd ed.). Jossey-Bass.

Bélanger, P. (2011). Theories in adult learning and education. Verlag Barbara Budrich. https://www.jstor.org/stable/j.ctvbkjx77

Bickerstaff, S., Raphael, J., Hodara, M., Leasor, L. A., & Riggs, S. (2021). The implementation and outcomes of lesson study in community college mathematics. Teachers College, Columbia University. https://ccrc.tc.columbia.edu/publications/lesson-study-implementation-outcomes.html

Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–364. https://doi.org/10.1007/BF00138871

Bingham, A. J., & Witkowsky, P. (2022). Deductive and inductive approaches to qualitative data analysis. In C. Vanover, P. Mihas, & J. Saldaña (Eds.), Analyzing and interpreting qualitative data: After the interview (pp. 133–146). SAGE Publications.

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64). Worth Publishers. https://bjorklab.psych.ucla.edu/wp-content/uploads/sites/13/2016/04/EBjork_RBjork_2011.pdf

Bjork, R. A., & Bjork, E. L. (1992). A new theory of disuse and an old theory of stimulus fluctuation. In A. Healy, S. Kosslyn, & R. Shiffrin (Eds.), From learning processes to cognitive processes: Essays in honor of William K. Estes (Vol. 2, pp. 35–67). Erlbaum. https://bjorklab.psych.ucla.edu/wp-content/uploads/sites/13/2016/07/RBjork_EBjork_1992.pdf

Blaich, C. F., & Wise, K. S. (2011). From gathering to using assessment results: Lessons from the Wabash National Study (Occasional Paper No. 8). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. https://www.learningoutcomesassessment.org/wp-content/uploads/2019/02/OccasionalPaper8.pdf

Bloom, B. (1956). Taxonomy of educational objectives: The classification of educational goals (1st ed.). Longmans, Green. http://www.uky.edu/~rsand1/china2018/resources.html

Bok, D. C. (2013). Higher education in America. Princeton University Press.

Bonwell, C. C., & Eison, J. A. (1991). Active learning: Creating excitement in the classroom. ERIC Clearinghouse on Higher Education. https://eric.ed.gov/?id=ED336049

Booth, A. (2016). Searching for qualitative research for inclusion in systematic reviews: A structured methodological review. Systematic Reviews, 5(1), 74. https://doi.org/10.1186/s13643-016-0249-x

Booth, A. (2017). Qualitative evidence synthesis. In K. Facey, H. Ploug Hansen, & A. Single (Eds.), Patient involvement in health technology assessment (pp. 187–199). Adis. http://eprints.whiterose.ac.uk/142061/
Boud, D. (1990). Assessment and the promotion of academic values. Studies in Higher Education, 15(1), 101–111. https://doi.org/10.1080/03075079012331377621

Bowker, L. (2016). Language and quality assurance: A case study highlighting the effects of power, resistance, and countertactics in academic program reviews. TTR: Traduction, terminologie, rédaction, 29(2), 177–193. https://doi.org/10.7202/1051018ar

Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate. Carnegie Foundation for the Advancement of Teaching. https://eric.ed.gov/?id=ED326149

Bramer, W. M., De Jonge, G. B., Rethlefsen, M. L., Mast, F., & Kleijnen, J. (2018). A systematic approach to searching: An efficient and complete method to develop literature searches. Journal of the Medical Library Association, 106(4). https://doi.org/10.5195/jmla.2018.283

Braxton, J. M., & Francis, C. H. W. (2017). Organizational assessment to improve college student persistence. Strategic Enrollment Management Quarterly, 5(2), 80–87.

Bresciani Ludvik, M. J. (2019). Outcomes-based program review: Closing achievement gaps in and outside the classroom with alignment to predictive analytics and performance metrics (2nd ed.). Stylus Publishing.

Brown, S. T., McGreevy, J., & Berigan, N. (2018). Evidence-informed improvement through collaborative professional integration. New Directions for Teaching and Learning, 2018(155), 55–64. https://doi.org/10.1002/tl.20303

Burns, K. A. (2017). Community college faculty as pedagogical innovators: How the Scholarship of Teaching and Learning (SoTL) stimulates innovation in the classroom. Community College Journal of Research and Practice, 41(3), 153–167. https://doi.org/10.1080/10668926.2016.1168327

Burnstad, H., & Hoss, C. J. (2010). Faculty development in the context of the community college. In K. J. Gillespie & D. L. Robertson (Eds.), A guide to faculty development (2nd ed., pp. 309–326). Jossey-Bass.

Butler, A., Hall, H., & Copnell, B. (2016). A guide to writing a qualitative systematic review protocol to enhance evidence-based practice in nursing and health care. Worldviews on Evidence-Based Nursing, 13(3), 241–249. https://doi.org/10.1111/wvn.12134

Cardoso, S., João Rosa, M., & Santos, C. S. (2013). Different academics’ characteristics, different perceptions on quality assessment? Quality Assurance in Education, 21(1), 96–117. https://doi.org/10.1108/09684881311293089

Cardoso, S., Rosa, M. J., & Stensaker, B. (2016). Why is quality in higher education not achieved? The view of academics. Assessment & Evaluation in Higher Education, 41(6), 950–965. https://doi.org/10.1080/02602938.2015.1052775

Carter, T. M. (2013). Use what you have: Authentic assessment of in-class activities. Reference Services Review, 41(1), 49–61. https://doi.org/10.1108/00907321311300875

Chick, N. L. (2018). An origin story. The National Teaching & Learning Forum, 27(6), 7–8. https://doi.org/10.1002/ntlf.30172

Chick, N. L. (2019). How to start doing SoTL. The National Teaching & Learning Forum, 28(2), 10–11. https://doi.org/10.1002/ntlf.30189

Coates, H. (2015). Moving beyond anarchy to build a new field. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. https://www.learningoutcomesassessment.org/wp-content/uploads/2019/08/Viewpoint-Coates.pdf
Connected Papers. (2021). https://www.connectedpapers.com/

Cooke, A., Smith, D., & Booth, A. (2012). Beyond PICO: The SPIDER tool for qualitative evidence synthesis. Qualitative Health Research, 22(10), 1435–1443. https://doi.org/10.1177/1049732312452938

Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry & research design: Choosing among five approaches (4th ed.). SAGE.

Culver, S., & Phipps, G. (2018). According to faculty, the most important reasons for doing assessment at an HBCU. Journal of Assessment and Institutional Effectiveness, 8(1–2), 1–21. https://doi.org/10.5325/jasseinsteffe.8.1-2.0001

Cydis, S., Galantino, M. L., Hood, C., Padden, M., & Richard, M. (2015). Integrating and assessing essential learning outcomes: Fostering faculty development and student engagement. Journal of the Scholarship of Teaching and Learning, 15(3), 33–52. https://doi.org/10.14434/josotl.v15i3.13315

Day, S. (2012). A reflexive lens: Exploring dilemmas of qualitative methodology through the concept of reflexivity. Qualitative Sociology Review, 8(1), 60–85. http://www.qualitativesociologyreview.org/ENG/Volume21/QSR_8_1_Day.pdf

Demeter, E., Robinson, C., & Frederick, J. G. (2019). Holistically assessing critical thinking and written communication learning outcomes with direct and indirect measures. Research & Practice in Assessment, 14(1), 41–51. https://www.proquest.com/scholarly-journals/holistically-assessing-critical-thinking-written/docview/2618270918/se-2?accountid=28148

Dervin, B. (1999). Chaos, order, and sense-making: A proposed theory for information design. In R. E. Jacobson (Ed.), Information design (pp. 35–57). MIT Press. https://www.ideals.illinois.edu/bitstream/handle/2142/2279/dervindraft.htm

Dickson, K. L., & Treml, M. M. (2013). Using assessment and SoTL to enhance student learning. New Directions for Teaching and Learning, 2013(136), 7–16. https://doi.org/10.1002/tl.20072

Egan, K. (2003). Start with what the student knows or with what the student can imagine? Phi Delta Kappan, 84(6), 443–445. https://doi.org/10.1177/003172170308400606

Erickson, H. L. (2001). Stirring the head, heart, and soul: Redefining curriculum and instruction (2nd ed.). Corwin Press.

Eubanks, D. (2021). Assessing for student success. Intersection: A Journal at the Intersection of Assessment and Learning, 2(2), 1–11. https://aalhe.scholasticahq.com/issue/2893

Eubanks, D., & Fulcher, K. (2021). The next ten years: The future of assessment practice? Research & Practice in Assessment, 16(1), 1–6. https://www.rpajournal.com/dev/wp-content/uploads/2021/03/The-Future-of-Assessment-Practice.pdf

European Higher Education Area and Bologna Process. (n.d.). http://www.ehea.info/

Ewell, P. T. (1997). Organizing for learning: A new imperative. AAHE Bulletin, 61, 52–55. https://www.aahea.org/articles/ewell.htm

Ewell, P. T. (2009). Assessment, accountability, and improvement: Revisiting the tension (Occasional Paper No. 1). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. https://www.learningoutcomeassessment.org/documents/PeterEwell_005.pdf

Ewell, P. T., & Cumming, T. (2017). Introduction: History and conceptual basis of assessment in higher education. In T. Cumming & D. Miller (Eds.), Enhancing assessment in higher education: Putting psychometrics to work. Stylus. https://academicworks.cuny.edu/ny_pubs/231/
Excellence in Assessment (EIA) Designation—NILOA. (n.d.). Retrieved June 19, 2020, from https://www.learningoutcomesassessment.org/eia/

Felten, P., Gardner, J. N., Schroeder, C. C., Lambert, L. M., & Barefoot, B. O. (2016). Improvement matters. In The undergraduate experience: Focusing institutions on what matters most (1st ed., pp. 113–134). Jossey-Bass. https://www.elon.edu/u/the-undergraduate-experience/#resources

Finfgeld, D. L. (2003). Metasynthesis: The state of the art—so far. Qualitative Health Research, 13(7), 893–904. https://doi.org/10.1177/1049732303253462

Fink, L. D. (2013). Creating significant learning experiences: An integrated approach to designing college courses (Rev. ed.). Jossey-Bass.

Fulcher, K. H., Good, M. R., Coleman, C. M., & Smith, K. L. (2014). A simple model for learning improvement: Weigh pig, feed pig, weigh pig (Occasional Paper No. 23). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. https://www.learningoutcomesassessment.org/wp-content/uploads/2019/02/OccasionalPaper23.pdf

Fulcher, K. H., Smith, K. L., Sanchez, E. R., Ames, A. J., & Meixner, C. (2017). Return of the pig: Standards for learning improvement. Research & Practice in Assessment, 11, 10–40.

Fuller, M. (2011). The Survey of Assessment Culture conceptual framework. https://www.shsu.edu/research/survey-of-assessment-culture/documents/TheSurveyofAssessmentCultureConceptualFramework.pdf

Fuller, M. B., & Skidmore, S. T. (2014). An exploration of factors influencing institutional cultures of assessment. International Journal of Educational Research, 65, 9–21. https://doi.org/10.1016/j.ijer.2014.01.001

Fuller, M. B., Skidmore, S. T., Bustamante, R. M., & Holzweiss, P. C. (2016). Empirically exploring higher education cultures of assessment. The Review of Higher Education, 39(3), 395–429. https://doi.org/10.1353/rhe.2016.0022

Given, L. (2008). Evidence-based practice. In The SAGE encyclopedia of qualitative research methods (pp. 309–311). SAGE Publications, Inc. https://doi.org/10.4135/9781412963909

Grannan, S., & Calkins, S. (2018). Creating a culture for evidence-based assessment of learning. New Directions for Teaching and Learning, 2018(155), 11–20. https://doi.org/10.1002/tl.20298

Guetterman, T. C., & Mitchell, N. (2016). The role of leadership and culture in creating meaningful assessment: A mixed methods case study. Innovative Higher Education, 41(1), 43–57. https://doi.org/10.1007/s10755-015-9330-y

Habersang, S., Küberling-Jost, J., Reihlen, M., & Seckler, C. (2019). A process perspective on organizational failure: A qualitative meta-analysis. Journal of Management Studies, 56(1), 19–56. https://doi.org/10.1111/joms.12341

Hansen, M. A. (2022). Education research guide. MSU Library. https://guides.lib.montana.edu/education

Hersh, R. H., & Keeling, R. P. (2013). Changing institutional culture to promote assessment of higher learning (Occasional Paper No. 17). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. https://www.learningoutcomesassessment.org/wp-content/uploads/2019/02/OccasionalPaper17.pdf

Higher Education Assessment Practitioners. (2018). Why are we assessing? University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/uploads/2019/08/Viewpoint-Why-Are-We-Assessing.pdf

Horst, S. J., & Prendergast, C. O. (2020). The Assessment Skills Framework: A taxonomy of assessment knowledge, skills and attitudes. Research & Practice in Assessment, 15(1), 4–28. https://www.rpajournal.com/dev/wp-content/uploads/2020/08/RPA-Volume-15-Issue-1.pdf
Huba, M. E., & Freed, J. E. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Allyn and Bacon.

Hutchings, P. (2000). Approaching the scholarship of teaching and learning. In P. Hutchings (Ed.), Opening lines: Approaches to the scholarship of teaching and learning (pp. 1–10). Carnegie Foundation for the Advancement of Teaching. https://eric.ed.gov/?id=ED449157

Hutchings, P. (2011). From departmental to disciplinary assessment: Deepening faculty engagement. Change: The Magazine of Higher Learning, 43(5), 36–43. https://doi.org/10.1080/00091383.2011.599292

Hutchings, P., Borin, P., Keesing-Styles, L., Martin, L., Michael, R., Scharff, L., Simkins, S., & Ismail, A. (2013). The Scholarship of Teaching and Learning in an age of accountability: Building bridges. Teaching & Learning Inquiry: The ISSOTL Journal, 1(2), 35–47. https://doi.org/10.2979/teachlearninqu.1.2.35

Hutchings, P., Kinzie, J., & Kuh, G. D. (2015). Evidence of student learning: What counts and what matters for improvement. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. https://www.learningoutcomesassessment.org/wp-content/uploads/2019/08/Viewpoint-HutchingsKinzieKuh.pdf

Ikenberry, S. O., & Kuh, G. D. (2015). From compliance to ownership: Why and how universities assess student learning. In G. D. Kuh, S. O. Ikenberry, N. A. Jankowski, T. R. Cain, P. T. Ewell, P. Hutchings, & J. Kinzie (Eds.), Using evidence of student learning to improve higher education (pp. 1–23). Jossey-Bass.

Jankowski, N. A. (2020). Assessment during a crisis: Responding to a global pandemic. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). http://search.ebscohost.com.proxybz.lib.montana.edu/login.aspx?direct=true&db=eric&AN=ED608798&site=ehost-live

Jankowski, N. A., & Baker, G. R. (2019). Building a narrative via evidence-based storytelling: A toolkit for practice. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/evidence-based-storytelling/

Jankowski, N. A., & Marshall, D. W. (2017). Degrees that matter: Moving higher education to a learning systems paradigm (1st ed.). Stylus Publishing.

Jankowski, N. A., Timmer, J. D., Kinzie, J., & Kuh, G. D. (2018). Assessment that matters: Trending toward practices that document authentic student learning. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/documents/NILOA2018SurveyReport.pdf

Jonson, J. L., Thompson, R. J., Guetterman, T. C., & Mitchell, N. (2017). The effect of informational characteristics and faculty knowledge and beliefs on the use of assessment. Innovative Higher Education, 42(1), 33–47. https://doi.org/10.1007/s10755-016-9366-7

Karabulut-Ilgu, A., AlZoubi, D., & Baran, E. (2021). Exploring engineering faculty’s use of active-learning strategies in their teaching. Faculty Perspectives of Active Learning, Inequity, and Curricular Change, 11. https://peer.asee.org/37140
Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966–968. https://doi.org/10.1126/science.1152408

Kezar, A. (2013). Institutionalizing student outcomes assessment: The need for better research to inform practice. Innovative Higher Education, 38(3), 189–206. https://doi.org/10.1007/s10755-012-9237-9

Kezar, A. J., & Eckel, P. D. (2002). The effect of institutional culture on change strategies in higher education: Universal principles or culturally responsive concepts? The Journal of Higher Education, 73(4), 435–460. https://doi.org/10.1353/jhe.2002.0038

Kinzie, J., & Jankowski, N. A. (2015). Making assessment consequential: Organizing to yield results. In G. D. Kuh, S. O. Ikenberry, N. A. Jankowski, T. R. Cain, P. T. Ewell, P. Hutchings, & J. Kinzie (Eds.), Using evidence of student learning to improve higher education. Jossey-Bass.

Knowles, M. S., Holton, E. F., III, & Swanson, R. A. (2005). The adult learner: The definitive classic in adult education and human resource development (6th ed.). Elsevier. https://ebookcentral.proquest.com/lib/montana/detail.action?docID=232125

Kolb, D. A. (1984). The process of experiential learning. In Experiential learning: Experience as the source of learning and development. Prentice-Hall.

Kuh, G. D., Ikenberry, S. O., Jankowski, N. A., Cain, T. R., Ewell, P. T., Hutchings, P., & Kinzie, J. (2015). Beyond compliance: Making assessment matter. Change: The Magazine of Higher Learning, 47(5), 8–17. https://doi.org/10.1080/00091383.2015.1077661

Lachal, J., Revah-Levy, A., Orri, M., & Moro, M. R. (2017). Metasynthesis: An original method to synthesize qualitative literature in psychiatry. Frontiers in Psychiatry, 8, 269. https://doi.org/10.3389/fpsyt.2017.00269

Lakos, A., & Phipps, S. E. (2004). Creating a culture of assessment: A catalyst for organizational change. Portal: Libraries and the Academy, 4(3), 345–361. https://doi.org/10.1353/pla.2004.0052

Leaderman, E. C., & Polychronopoulos, G. B. (2019). Humanizing the assessment process: How the RARE model informs best practices. Research & Practice in Assessment, 14(Summer), 30–40. https://www.rpajournal.com/humanizing-the-assessment-process-how-the-rare-model-informs-best-practices/

Lévi-Strauss, C. (1968). The savage mind. University of Chicago Press.

Maki, P. (2010). Assessing for learning: Building a sustainable commitment across the institution (2nd ed.). Stylus Publishing.

Marrujo-Duck, L. E. (2017). Talking ourselves into it: Faculty perspectives of student learning outcome (SLO) assessment, learning, and equity. Intersection, 1(2), 25–31. https://aalhe.memberclicks.net/assets/docs/Winter2017_Intersection.pdf

Maslow, A. H. (1970). Motivation and personality (2nd ed.). Harper & Row.

Maxwell, J. A. (2013). Qualitative research design: An interactive approach (3rd ed.). SAGE Publications.

Methley, A. M., Campbell, S., Chew-Graham, C., McNally, R., & Cheraghi-Sohi, S. (2014). PICO, PICOS and SPIDER: A comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Services Research, 14(1), 579. https://doi.org/10.1186/s12913-014-0579-0

Mezirow, J. (1994). Understanding transformation theory. Adult Education Quarterly, 44(4), 222–232. https://doi.org/10.1177/074171369404400403
Mezirow, J. (1997). Transformative learning: Theory to practice. New Directions for Adult and Continuing Education, 74, 5–12. https://doi.org/10.1002/ace.7401

Miller, J. P. (2006). Educating for wisdom and compassion: Creating conditions for timeless learning. Corwin Press.

Montenegro, E., & Jankowski, N. A. (2020). A new decade for assessment: Embedding equity into assessment praxis (Occasional Paper No. 42). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/uploads/2020/01/A-New-Decade-for-Assessment.pdf

National Institute for Learning Outcomes Assessment. (2011). Transparency framework. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). http://www.learningoutcomesassessment.org/TransparencyFramework.htm

Ndoye, A., & Parker, M. A. (2010). Creating and sustaining a culture of assessment. Planning for Higher Education, 38(2), 28–39. https://www.scup.org/journal/pdfs/V38N2_28-39_Ndoye-Parker.pdf

Newhart, D. W. (2015). To learn more about learning: The value-added role of qualitative approaches to assessment. Research & Practice in Assessment, 10(1), 5. http://www.rpajournal.com/dev/wp-content/uploads/2015/06/A1.pdf

Noblit, G. (2017). Third guiding process: Synthesizing new-self insights with MICA. In S. A. Hughes & J. L. Pennington (Eds.), Autoethnography: Process, product, and possibility for critical social research (pp. 110–142). SAGE Publications, Inc. https://doi.org/10.4135/9781483398594

Noblit, G. W., & Hare, R. D. (1988). Meta-ethnography: Synthesizing qualitative studies. SAGE.

No Child Left Behind Act of 2001, Pub. L. No. 107-110, 20 U.S.C. § 6319 (2002).

Novak, J. D., & Gowin, D. B. (1984). Learning how to learn. Cambridge University Press.

Olson, M. H., & Hergenhahn, B. R. (2012). Introduction to theories of learning (9th ed.). Routledge. https://ebookcentral.proquest.com/lib/montana/detail.action?docID=3570010

Ong, M., Jaumot-Pascual, N., & Ko, L. T. (2020). Research literature on women of color in undergraduate engineering education: A systematic thematic synthesis. Journal of Engineering Education, 109(3), 581–615. https://doi.org/10.1002/jee.20345

Ouellett, M. L. (2010). Overview of faculty development. In K. J. Gillespie & D. L. Robertson (Eds.), A guide to faculty development (2nd ed., pp. 3–20). Jossey-Bass.

Paul, L. A., & Quiggin, J. (2020). Transformative education. Educational Theory, 70(5), 561–579. https://doi.org/10.1111/edth.12444

Pelletreau, K. N., Knight, J. K., Lemons, P. P., McCourt, J. S., Merrill, J. E., Nehm, R. H., Prevost, L. B., Urban-Lurain, M., & Smith, M. K. (2018). A faculty professional development model that improves student learning, encourages active-learning instructional practices, and works for faculty at multiple institutions. CBE—Life Sciences Education, 17(2). https://doi.org/10.1187/cbe.17-12-0260

Peterson, M. W., & Vaughan, D. S. (2002). Promoting academic improvement: Organizational and administrative dynamics that support student assessment. In T. W. Banta (Ed.), Building a scholarship of assessment (1st ed., pp. 26–48). Jossey-Bass.

Poole, B. (2010). Quality, semantics and the two cultures. Quality Assurance in Education, 18(1), 6–18. https://doi.org/10.1108/09684881011015963

Poole, G. (2018). Using intuition, anecdote, and observation: Rich sources of SoTL projects. In N. L. Chick (Ed.), SoTL in action: Illuminating critical moments of practice (1st ed., pp. 7–14). Stylus.
Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223–231. https://doi.org/10.1002/j.2168-9830.2004.tb00809.x

Provezis, S. (2010). Regional accreditation and student learning outcomes: Mapping the territory (Occasional Paper No. 6). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). https://www.learningoutcomesassessment.org/wp-content/uploads/2019/02/OccasionalPaper6.pdf

Rhodes, T. L. (2016). The VALUE of assessment: Transforming the culture of learning. Change: The Magazine of Higher Learning, 48(5), 36–43. https://doi.org/10.1080/00091383.2016.1227674

Rhodes, T. (2017). The VALUE of learning. Liberal Education, 103(1), 22.

Roscoe, D. D. (2017). Toward an improvement paradigm for academic quality. Liberal Education, 103(1), 14. https://www.aacu.org/liberaleducation/2017/winter/roscoe

Saldaña, J. (2021). The coding manual for qualitative researchers (4th ed.). SAGE Publishing Ltd.

Sandelowski, M., & Barroso, J. (2003). Writing the proposal for a qualitative research methodology project. Qualitative Health Research, 13(6), 781–820. https://doi.org/10.1177/1049732303013006003

Sandelowski, M., Docherty, S., & Emden, C. (1997). Qualitative metasynthesis: Issues and techniques. Research in Nursing & Health, 20(4), 365–371. https://doi.org/10.1002/(SICI)1098-240X(199708)20:4<365::AID-NUR9>3.0.CO;2-E

Savin-Baden, M., & Major, C. H. (2013). Qualitative research: The essential guide to theory and practice. Routledge.

Schreier, M., Atkinson, P., Delamont, S., Cernat, A., Sakshaug, J. W., & Williams, R. A. (2020). Content analysis, qualitative. https://methods.sagepub.com/foundations/qualitative-content-analysis

Schunk, D. H. (2020). Learning theories: An educational perspective (8th ed.). Pearson.

Schwartz, B. M., & Haynie, A. (2013). Faculty development centers and the role of SoTL. New Directions for Teaching and Learning, 2013(136), 101–111. https://doi.org/10.1002/tl.20079

Shavelson, R. J. (2007). A brief history of student learning assessment: How we got where we are and a proposal for where to go next. Association of American Colleges and Universities. https://web.stanford.edu/dept/SUSE/SEAL/Reports_Papers/Shavelson_AcadTransition.pdf

Shulman, L. S. (1993). Teaching as community property: Putting an end to pedagogical solitude. Change: The Magazine of Higher Learning, 25(6), 6–7. https://doi.org/10.1080/00091383.1993.9938465

Skidmore, S. T., Hsu, H.-Y., & Fuller, M. (2018). A person-centred approach to understanding cultures of assessment. Assessment & Evaluation in Higher Education, 43(8), 1241–1257. https://doi.org/10.1080/02602938.2018.1447082

Smith, R. (2001). Expertise and the scholarship of teaching. New Directions for Teaching and Learning, 2001(86), 69–78. https://doi.org/10.1002/tl.17

Stanny, C. J. (2018). Promoting an improvement culture. University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. https://www.learningoutcomesassessment.org/wp-content/uploads/2019/08/Viewpoints-Stanny.pdf

Suskie, L. A. (2014). Five dimensions of quality: A common sense guide to accreditation and accountability. Jossey-Bass.

Suskie, L. A. (2018). Assessing student learning: A common sense guide (3rd ed.). Jossey-Bass.
Thorne, S. (2008). Meta-synthesis. In L. Given (Ed.), The SAGE encyclopedia of qualitative research methods (pp. 511–513). SAGE Publications, Inc. https://doi.org/10.4135/9781412963909

Thorne, S., Jensen, L., Kearney, M. H., Noblit, G., & Sandelowski, M. (2004). Qualitative metasynthesis: Reflections on methodological orientation and ideological agenda. Qualitative Health Research, 14(10), 1342–1365. https://doi.org/10.1177/1049732304269888

Tinnell, T. L., Ralston, P. A. S., Tretter, T. R., & Mills, M. E. (2019). Sustaining pedagogical change via faculty learning community. International Journal of STEM Education, 6(1), 26. https://doi.org/10.1186/s40594-019-0180-5

U.S. Department of Education. (2006). A test of leadership: Charting the future of U.S. higher education. https://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/pre-pub-report.pdf

Volkwein, J. F. (2003). Implementing outcomes assessment on your campus. Research and Planning eJournal, 1. http://www.rpgroup.org/Publications/eJournal/Volume_1/volume_1.htm

Volkwein, J. F. (2011). Gaining ground: The role of institutional research in assessing student outcomes and demonstrating institutional effectiveness (Occasional Paper No. 11). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. https://www.learningoutcomesassessment.org/wp-content/uploads/2019/02/OccasionalPaper11.pdf

Walvoord, B. E. (2010). Assessment clear and simple: A practical guide for institutions, departments, and general education (2nd ed.). Jossey-Bass.

Wehlburg, C. M. (2010). Assessment practices related to student learning. In K. J. Gillespie & D. L. Robertson (Eds.), A guide to faculty development (2nd ed., pp. 169–184). Jossey-Bass.

Weimer, M. (2001). Learning more from the wisdom of practice. New Directions for Teaching and Learning, 2001(86), 45–56. https://doi.org/10.1002/tl.15

Weiner, W. F. (2009). Establishing a culture of assessment. Academe, 95(4), 28–32. http://www.jstor.org/stable/40253350

Welsh, J. (2018). Interview: Dr. Barbara Johnson, Vice President at the Higher Learning Commission. Intersection, Special edition, 8–10. https://aalhe.memberclicks.net/assets/docs/AAHLE_Special_Edition_2018_I.pdf

Welsh, J. F., & Metcalf, J. (2003). Faculty and administrative support for institutional effectiveness activities: A bridge across the chasm? The Journal of Higher Education, 74(4), 445–468. https://doi.org/10.1080/00221546.2003.11780856

Wheat, C. A., Sun, Y., Wedgworth, J. C., & Hocutt, M. M. (2018). Active university teaching and engaged student learning: A mixed methods approach. The Journal of Scholarship of Teaching and Learning, 18(4), 28. https://doi.org/10.14434/josotl.v18i4.22784

Wiggins, G. P. (1998). Ensuring authentic performance. In Educative assessment: Designing assessments to inform and improve student performance (1st ed., pp. 21–42). Jossey-Bass. http://search.ebscohost.com.proxybz.lib.montana.edu/login.aspx?direct=true&db=nlebk&AN=26052&site=ehost-live

Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3–14. https://doi.org/10.1016/j.stueduc.2011.03.001

Wiliam, D. (2017). Learning and assessment: A long and winding road? Assessment in Education: Principles, Policy & Practice, 24(3), 309–316. https://doi.org/10.1080/0969594X.2017.1338520

Willett, G., Iverson, E. R., Rutz, C., & Manduca, C. A. (2014). Measures matter: Evidence of faculty development effects on faculty and student learning. Assessing Writing, 20, 19–36. https://doi.org/10.1016/j.asw.2013.12.001
Wilson, S. M., & Peterson, P. L. (2006). Theories of learning and teaching: What do they mean for educators? National Education Association. https://files.eric.ed.gov/fulltext/ED495823.pdf

Worthen, M. (2018, February 23). The misguided drive to measure ‘learning outcomes.’ The New York Times. https://www.nytimes.com/2018/02/23/opinion/sunday/colleges-measure-learning-outcomes.html

Zimmer, L. (2006). Qualitative meta-synthesis: A question of dialoguing with texts. Journal of Advanced Nursing, 53(3), 311–318. https://doi.org/10.1111/j.1365-2648.2006.03721.x

APPENDICES

APPENDIX A

RQ1 SEARCH STRATEGIES

RQ1 Exploratory Searches

Primo (MSU Library Catalog). Search everything: ("Higher education" AND "culture of assessment" AND "improved student learning outcomes") AND (questionnaire* OR survey* OR interview* OR focus group OR case stud* OR observational stud* OR descriptive research OR phenomenolog* OR grounded theory) AND (qualitative OR "mixed method*"). Total results: 5; relevant results: 0.

Primo (MSU Library Catalog), modified SPIDER search. Search everything: ("higher education" AND "culture of assessment" AND "improved student learning outcomes") AND (questionnaire* OR survey* OR interview* OR focus group OR case stud* OR observational stud* OR descriptive research OR phenomenolog* OR grounded theory OR qualitative OR "mixed method*"). Total results: 8; relevant results: 0.

Primo search 1. Search everything: Sample & PofI keywords. Filters: "peer-reviewed journals"; "higher education". Total results: 843; relevant results: 200 after manual review (includes FacDev results).

Alternative Primo search 1. Search everything: ("Culture of assessment" OR "assessment culture") AND ("student learning outcomes improvement" OR "improved student learning outcomes"). Filters: "peer-reviewed journals"; "higher education". Total results: 201; relevant results: 17 after manual review.

Primo search 2. (PofI to address each phenomenon individually AND "learning improvement") AND ("questionnaire*" OR "survey*" OR "interview*" OR "focus group*" OR "case stud*" OR "observ*" OR "grounded theory") AND (qualitative research OR qualitative study OR qualitative methods OR "mixed method*"). Total results: 6; relevant results: 2 after removing duplicates. By phenomenon: assessment culture = 1; culture of assessment = 2; student learning outcomes assessment = 3; culture of improvement = 0; culture of evidence = 0; outcome-based education = 0; institutional assessment = 2; institutional effectiveness = 3; program assessment = 4.

Primo search 2, continued. PofI to address each phenomenon individually AND "improved student learning" AND ("questionnaire*" OR "survey*" OR "interview*" OR "focus group*" OR "case stud*" OR "observ*" OR "grounded theory") AND (qualitative research OR qualitative study OR qualitative methods OR "mixed method*"). Total results: 18; relevant results: 2. By phenomenon: assessment culture = 2; culture of assessment = 4; student learning outcomes assessment = 3; culture of improvement = 0; culture of evidence = 1; outcome-based education = 0; institutional assessment = 3; institutional effectiveness = 3; program assessment = 5.
Primo search 3 (added "United States" as a search term). Any field contains ("learning improvement" OR "improved student learning") AND any field contains ("assessment culture" OR "culture of assessment" OR "institutional effectiveness") AND any field contains ("United States"). Filters: Peer-reviewed Journals; Articles; Higher Education; Education; Higher Education Administration; Educational Leadership; Educational Evaluation; Educational Assessment; Accreditation; Accountability; Educational Administration; Education & Educational Research; Community College Education; Assessment; Years: 2013-2021. Total results: 8.

Primo search 3, modified. Removed "United States", removed all filters, kept all other search terms. Total results: 115.

Primo search 3, modified. Retained the same search terms and added the following filters: Peer-reviewed Journals; Articles; Dissertations; Higher Education; Education; Assessment; Educational Evaluation; Educational Leadership; Education & Educational Research; Educational Assessment; Accountability; Evaluation Methods; Questionnaires; Case Studies; Formative Assessment; Formative Evaluation; Student Evaluation; Learning; Teaching; College Students; Years: 2013-2021. Total results: 27; few were relevant (most were international).

Primo search 4. Any field contains ("learning improvement" OR "improved student learning") AND any field contains ("culture of assessment"). Filters: Educational Assessment; Case Studies; Leadership; Questionnaires; Organizational Culture; Education & Educational Research; Accountability; Assessment; Educational Evaluation; Educational Leadership; Education; Learning; Higher Education; Articles; Dissertations; Conference Proceedings; Peer-reviewed Journals; Culture; Analysis; Social Sciences; Years: 2013-2021. Total results: 151 (54 after filtering); few were relevant (most were related to the medical field).

Primo search 4, modified. "student learning" AND "culture of assessment". Filters: Peer-reviewed Journals; Articles; Conference Proceedings; Reports; Higher Education; Assessment; Colleges & Universities; College Faculty; United States; Education, Higher; Accreditation; College Students; Years: 2013-2022. Total results: 252.

Primo summary: 398 sources excluded; 70 total relevant after reviewing all sources found in Primo searches.

ERIC. Search everything: Sample & PofI keywords, "peer-reviewed journals". Filters: "qualitative research"; "mixed methods research". Total results: 49; relevant results: 3.

ERIC, all SPIDER terms. ("Higher education" AND "culture of assessment" AND "improved student learning outcomes") AND (questionnaire OR survey OR interview OR focus group OR case study OR observational study) AND (qualitative OR "mixed method"). Total results: 38; relevant results: 0.

Modified ERIC search 1. ("higher education") AND ("assessment culture" OR "culture of assessment" OR "student learning outcomes assessment" OR "culture of improvement" OR "culture of evidence" OR "outcome-based education" OR "institutional assessment") AND ("learning improvement" OR "improved student learning"). Filters: "peer reviewed"; publication date range 2013-2021. Total results: 3; relevant: n/a.

Modified ERIC search 1, expanded. Expanded the search to "find any search terms" and "apply equivalent subjects" and modified the PofI as "institutional culture" OR "assessment culture" OR "culture of assessment". Total results: 51; relevant: n/a.

Modified ERIC search 2. Modified PofI to address each phenomenon individually AND "learning improvement". Total results: 434; relevant results: 122. By phenomenon: assessment culture = 14; culture of assessment = 23; student learning outcomes assessment = 18; culture of improvement = 3; culture of evidence = 10; outcome-based education = 75; institutional assessment = 200; institutional effectiveness = 23; program assessment = 68.
Modified ERIC search 2, continued. Modified PofI to address each phenomenon individually AND "improved student learning". Total results: 327; relevant results: 140. By phenomenon: assessment culture = 24; culture of assessment = 24; student learning outcomes assessment = 60; culture of improvement = 60; culture of evidence = 17; outcome-based education = 59; institutional assessment = 23; institutional effectiveness = 25; program assessment = 35.

Modified ERIC search 3. "culture of assessment" AND "improved student learning". Filters: apply related words; 2013-2022; all journals & documents; higher education/postsecondary education/two-year colleges; collected works: proceedings/information analyses/journal articles/reports (all)/speeches & meeting papers. Total results: 12; relevant results: 8.

Modified ERIC search 3, expanded. Added "improved learning outcomes". Filters: postsecondary education; teaching methods; outcomes of education; undergraduate students; college students; instructional effectiveness; academic achievement; college faculty; questionnaires; case studies; qualitative research; program effectiveness; educational change; faculty development; interviews; educational improvement; educational quality; program evaluation. Total results: 465; relevant results: 0.

Modified ERIC search 4. ("culture of assessment") AND ("learning outcomes" OR "learning effectiveness" OR "educational outcomes") NOT ("foreign countries"). Filters: same as the previous search. Total results: 29.

ERIC summary: 597 sources excluded; 70 total relevant after reviewing all sources found in ERIC searches.

Google Scholar. Search for culture of assessment + improved student learning outcomes + 2013-2022 + higher education + "not dissertation". Total results: 3; relevant results: 2.

Google Scholar. "higher education" AND "culture of assessment" AND qualitative AND "learning improvement" OR "improved learning" OR "improved outcomes" NOT dissertation, NOT book. Total results: 73.

Connected Papers. Entered bibliographic information for Kezar (2013) (cited by). Total results: 7; relevant results: 3.

Google Scholar. Entered bibliographic information for Kezar (2013) (cited by). Total results: 46; relevant results: 2.
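The exploratory strings above were assembled by hand for each interface, but their grouped AND/OR structure lends itself to scripting. The following minimal Python sketch is offered only as an illustration and was not part of this study's search protocol: the helper functions are hypothetical, the term lists are copied from the modified SPIDER search logged above, and the sketch quotes multi-word phrases that the logged string left unquoted.

    # Illustrative sketch only: builds a Boolean search string in the style of
    # the modified SPIDER strategy logged above. Helper names are hypothetical.

    def quote(term):
        """Wrap multi-word terms in quotation marks for exact-phrase matching."""
        return f'"{term}"' if " " in term else term

    def or_group(terms):
        """Join a list of terms into a single parenthesized OR group."""
        return "(" + " OR ".join(quote(t) for t in terms) + ")"

    # Terms copied from the search log: required concepts are ANDed together,
    # study-design terms form one OR group.
    required = ["higher education", "culture of assessment",
                "improved student learning outcomes"]
    design = ["questionnaire*", "survey*", "interview*", "focus group",
              "case stud*", "observational stud*", "descriptive research",
              "phenomenolog*", "grounded theory", "qualitative", "mixed method*"]

    query = "(" + " AND ".join(quote(t) for t in required) + ") AND " + or_group(design)
    print(query)

Generating strings this way makes it easier to rerun the same concept groups across databases whose syntax differs only in quoting or truncation conventions.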
RQ1 Scoping Searches

Primo (library catalog), scoping search using "Title". "culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture". Filters: Years: 2013-2021; Peer-reviewed Journals; Articles; Conference Proceedings; Reports; English. Total results: 236; relevant results: 54.

Primo, Title. "improved student learning outcomes" OR "learning improve*" OR "student improve*". Filters: same as above. Total results: 1676; relevant results: 239.

Primo, Title. "higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university. Filters: same as above. Total results: 325,402; relevant: n/a.

Primo, full Title search using all search terms above (using AND between "culture of assessment" and "improved learning outcomes" offered 0 results). Title contains ("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture") OR ("improved student learning outcomes" OR "learning improve*" OR "student improve*") AND ("higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university). Filters: Years: 2013-2021; Peer-reviewed Journals; Articles; Conference Proceedings; Reports; English. Total results: 477; relevant results: 72.

Primo, repeated Title search for improved learning outcomes + higher education. ("improved student learning outcomes" OR "learning improve*" OR "student improve*") AND ("higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university). Filters: same as above. Total results: 18; relevant results: 18.

Primo, repeated Title search for culture of assessment + higher education. ("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture") AND ("higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university). Filters: same as above. Total results: 76; relevant results: 15.

Primo, repeated Title search for culture of assessment + improved learning outcomes. ("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture") AND ("improved student learning outcomes" OR "learning improve*" OR "student improve*"). Total results: 0.

Primo, scoping search using "Subject". "culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture". Filters: Years: 2013-2021; Peer-reviewed Journals; Articles; Conference Proceedings; English. Total results: 53; relevant results: 32.

Primo, Subject. "improved student learning outcomes" OR "learning improve*" OR "student improve*" OR "learning outcomes assessment" OR "outcomes assessment". Filters: Years: 2013-2021; Peer-reviewed Journals; Articles; Conference Proceedings; Reports; English. Total results: 26,535; relevant: n/a.

Primo, Subject. "higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university. Filters: same as above. Total results: 428,753; relevant: n/a.

Primo, repeated Subject search for improved learning outcomes + higher education. ("improved student learning outcomes" OR "learning improve*" OR "student improve*") AND ("higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university). Total results: 1; relevant results: 0.

Primo, repeated Subject search for culture of assessment + higher education. ("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture") AND ("higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university). Filters: Years: 2013-2021; Peer-reviewed Journals; Articles; Conference Proceedings; Reports; English. Total results: 20; relevant results: 16.

Primo, repeated Subject search for culture of assessment + improved learning outcomes. ("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture") AND ("improved student learning outcomes" OR "learning improve*" OR "student improve*"). Total results: 1; relevant results: 1.
Primo, full Subject search using all search terms above (using AND between "culture of assessment" and "improved learning outcomes" offered 0 results). Subject contains ("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture") OR ("improved student learning outcomes" OR "learning improve*" OR "student improve*") AND ("higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university). Filters: Years: 2013-2021; Peer-reviewed Journals; Articles; Conference Proceedings; Reports; English. Total results: 26,551; relevant: n/a. A survey of the results indicated that the majority were related to healthcare. Filtering with the subject term "Outcome assessment" did not offer education-related results. Did not upload any citations to Zotero.

ERIC scoping searches. Scoping searches were only conducted for culture of assessment, as the other key terms are less difficult to find. Two-string searches (e.g., CofA+HE) were conducted only when a three-string search did not yield results or yielded very few (fewer than 10).

ERIC, Abstracts. "culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture" OR "organizational culture". Filters: Apply related words and subjects; Date Published: 20130101-20211231; Academic Journals; ERIC Documents; Reports; Language: English; Peer reviewed; Higher education (added because many of the results were related to medical/corporate environments); Postsecondary education; Universities. Total results: 173.

ERIC, full Abstract search (CofA+outcomes+HE). Total results: 142; relevant results: 137.

ERIC, Title scoping search. "culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture" OR "organizational culture". Filters: same as the ERIC Abstracts search above. Total results: 57.

ERIC, full Title search (CofA+outcomes+HE). Total results: 20; relevant results: 20.

ERIC, Text scoping search. Same search string and filters as above. Total results: 720; relevant results: 50.

ERIC, full Text search (CofA+outcomes+HE). Total results: 861; relevant results: 65.

ERIC, Subject scoping search. Same search string and filters as above. Total results: 638; relevant results: 63.

ERIC, full Subject search (CofA+outcomes+HE). Total results: 208.

Educator's Reference Complete, Abstracts. "culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture" OR "organizational culture". Filters: English; Peer-Reviewed; Document Type: "Abstract" OR "Article" OR "Case study" OR "Conference notes" OR "Report"; Date: 2013-2022. Total results: 15; relevant results: 15.
Educator's Reference Complete, Title. Same search string. Total results: 0; relevant results: 0.

Educator's Reference Complete, Text. Same search string. Total results: 26; relevant results: 26.

Web of Science, Abstracts. (AB=("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture")) AND (DT==("ARTICLE" OR "ABSTRACT") AND SJ==("EDUCATION EDUCATIONAL RESEARCH" OR "SOCIAL SCIENCES OTHER TOPICS") AND LA==("ENGLISH") AND SJ==("EDUCATION EDUCATIONAL RESEARCH")). Total results: 78; relevant results: 78.

Web of Science, Title. (TI=("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture")) AND (DT==("ARTICLE" OR "ABSTRACT") AND SJ==("EDUCATION EDUCATIONAL RESEARCH" OR "SOCIAL SCIENCES OTHER TOPICS") AND LA==("ENGLISH") AND SJ==("EDUCATION EDUCATIONAL RESEARCH")). Total results: 14; relevant results: 14.
Web of Science, Topic (a text search was not an option; saved as subject in Zotero). (TS=("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture")) AND (DT==("ARTICLE" OR "ABSTRACT" OR "REPORT") AND SJ==("EDUCATION EDUCATIONAL RESEARCH" OR "SOCIAL SCIENCES OTHER TOPICS") AND LA==("ENGLISH") AND SJ==("EDUCATION EDUCATIONAL RESEARCH")). Total results: 85; relevant results: 85.

Sociological Abstracts, Abstract search (a full-text search was not an option). "culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture" OR "organizational culture". Filters: English; Peer-Reviewed; Source Type: conference papers & proceedings, scholarly journals; Date: 2013-2022; exclude: corporate culture; include: higher education, colleges & universities, education. Total results: 9; relevant results: 9.

Sociological Abstracts, Title (removed "organizational culture"; its only results concerned organizational culture and were not related to education). "culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture". Total results: 0; relevant results: 0.

Sociological Abstracts, All Subjects & Indexing. "culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture" OR "organizational culture". Total results: 5; relevant results: 5.

Sociological Abstracts, Anywhere (added to Zotero with tag/collection "text"). Same search string. Total results: 83; relevant results: 83.

PsycInfo, Abstract. "culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture" OR "organizational culture". Filters: English; Peer-Reviewed; Source Type: Conference Proceedings, Journal Article, Peer Reviewed Journal, Peer-Reviewed Status-Unknown; Date: 2013-2022; exclude: business organizations; include: higher education. Total results: 17; relevant results: 17.

PsycInfo, Title. Same search string. Total results: 8; relevant results: 8.

PsycInfo, Anywhere (a text search was not available; added to Zotero with tag/collection "text"). Same search string; filter: higher education. Total results: 2311; relevant results: 38 (many were related to healthcare).

PsycInfo, All Subjects & Indexing. Same search string; many results were related to healthcare. Excluded "organizational culture" (no education-related subject term filters were available with this search term included in the search string). Total results: 1608; relevant results: 4.

ProQuest Central, Abstract. "culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture" OR "organizational culture". Filters: English; Peer-Reviewed; Source Type: conference papers & proceedings, reports, scholarly journals, working papers; Date: 2013-2022; higher education; learning; education; colleges & universities; NOT corporate culture. Total results: 3578; relevant results: 152.

ProQuest Central, Title. Same search string. Filters: English; peer reviewed; source type; date; higher education; colleges & universities; NOT corporate culture. Total results: 1575; relevant results: 32.

ProQuest Central, Text (FT). ("culture of assessment" OR "assessment culture" OR "culture of evidence" OR "evidence* culture" OR "organizational culture") AND (Higher education OR Education OR Colleges & universities NOT corporate culture). Filters: English; Peer reviewed; 2013-2022; Source type: Conference Papers & Proceedings, Reports, Scholarly Journals, Speeches & Presentations, Working Papers; qualitative research. Total results: 669; relevant results: 46.

ProQuest Central, Text (FT), modified. Modified the search to exclude "organizational culture"; retained the same source types, date range, English, and peer-reviewed filters. AND (Higher education OR Education OR Colleges & universities NOT corporate culture) AND (learning OR college students OR accountability OR college campuses OR community colleges OR learning outcomes OR quality of education OR academic achievement OR college faculty). Total results: 909; relevant results: 138.

Total RQ1 citations uploaded to Zotero: 4778.

RQ1 Search Results by Tag

CofA—text, all databases: 345
CofA—subject, all databases: 173
CofA+HE—abstract, all databases: 136
CofA+HE—title, all databases: 31
CofA+HE—text, all databases: 36
CofA+HE—subject, all databases: 74
CofA+outcomes—abstract, all databases: 3
CofA+outcomes—title, all databases: 0
CofA+outcomes—text, all databases: 14
CofA+outcomes—subject, all databases: 1
CofA+outcomes+HE—abstract, all databases: 2
CofA+outcomes+HE—title, all databases: 47
CofA+outcomes+HE—text, all databases: 13
CofA+outcomes+HE—subject, all databases: 0
Outcomes+HE—title, all databases: 12
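Because the same citation frequently surfaced in more than one database (4,778 RQ1 citations were uploaded to Zotero in all), tallies like those above depend on a deduplication pass before counting. The sketch below is a minimal illustration of that step, assuming hypothetical records with doi, title, and tags fields standing in for a Zotero export; it is not the procedure used in this study.

    # Illustrative sketch only: deduplicate retrieved citations before tallying
    # by tag. The sample records and field names are hypothetical stand-ins
    # for a Zotero export.
    from collections import Counter

    records = [
        {"doi": "10.1000/example", "title": "Example Study A",
         "tags": ["CofA+HE--title all databases"]},
        {"doi": "10.1000/example", "title": "Example study A",  # duplicate hit from a second database
         "tags": ["CofA+HE--title all databases"]},
    ]

    def dedupe(items):
        """Keep the first record per DOI, falling back to a normalized title."""
        seen, unique = set(), []
        for rec in items:
            key = rec.get("doi") or rec["title"].casefold().strip()
            if key not in seen:
                seen.add(key)
                unique.append(rec)
        return unique

    tag_counts = Counter(tag for rec in dedupe(records) for tag in rec["tags"])
    for tag, count in tag_counts.most_common():
        print(tag, count)

Keying on DOI first and falling back to a normalized title reflects the practical problem noted in the limitations above: the same article can carry different metadata in different databases.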
APPENDIX B

RQ2 SEARCH STRATEGIES

RQ2 Exploratory Searches

Primo (search everything). "Faculty development" AND "learning improvement". Filters: Articles; Conference proceedings; 2013-2021. Total results: 630; relevant results: 475.

Primo (search everything). Added filters to the previous search: peer-reviewed journals; Education; Social Sciences; Education & Educational Research; Learning; Faculty Development; Professional Development; Teaching; Students; Curricula; Teachers; college; university. Total results: 123; relevant results: 6.

ERIC. Keywords: ("faculty development" OR "professional development") AND "improve learning outcomes". Filters: apply related words & subjects; peer-reviewed; 2013-2022; all journals & documents; higher education/postsecondary/two-year colleges; ERIC digests/ERIC publications/journal articles/reports-evaluative/reports-research/speeches & meeting papers; English. Total results: 31; relevant results: 27.

PsycInfo. Keywords: ("faculty development" OR "professional development") AND "improve learning outcomes" AND ("higher education" OR "college" OR "university" OR "community college" OR "two year college"). Filters: peer reviewed; 2013-2022; conference proceedings/journal/journal article/peer reviewed journal/peer reviewed status unknown; empirical study/field study/focus group/follow-up study/interview/longitudinal study/meta-analysis/meta-synthesis/nonclinical case study/qualitative study/systematic review; English. Total results: 226; relevant results: 72.

Educator's Reference Complete. Keywords: "Faculty development" AND "improve learning outcomes". Filters: peer-reviewed journals; 2013-2022; articles; case study. Total results: 1; relevant result: 1.

ProQuest Central. ("Faculty development" OR "educator professional development" OR "professional development") AND ("Improve learning outcomes" OR "improved learning"). Filters: peer-reviewed; English; 2013-2022; Article OR Report OR Case Study OR Conference Proceeding OR Working Paper/Pre-Print; United States (US and all US locations; excluded all non-US locations); higher education; college students; colleges & universities; community colleges; college campuses. Total results: 36; relevant results: 13.

Google Scholar. learning "higher education" "faculty development" "improve* outcomes" "United States" "peer reviewed" AND "case study" OR "focus group" OR qualitative OR interview NOT -quantitative -dissertation -book. Total results: 15; relevant results: 0.

Google Scholar. "faculty development" "improve outcomes" qualitative OR "mixed methods" OR college OR university OR "higher education" "learning" NOT -dissertation -quantitative -patient -healthcare. Total results: 137; relevant results: 0.

Google Scholar. "faculty development" "improve learning outcomes" college OR university OR "higher education" qualitative OR "mixed methods" NOT -dissertation -quantitative -patient -healthcare. Total results: 45; relevant results: 0.

RQ2 Scoping Searches

Primo (library catalog), scoping search using "Title". "Faculty development" OR "educator professional development" OR "College teachers -- In-service training" OR "College teachers -- Training of" OR "college staff development" OR "college faculty professional development" OR "educational development" OR "Scholarship of teaching and learning" OR "SoTL" OR "professional continuing education" OR "staff development". Filters: Years: 2013-2021; Peer-reviewed Journals; Articles; Conference Proceedings; Reports; English; added subjects: Education & educational research, Education, Higher education, Faculty development, Professional development. Total results: 4448; relevant results: 2541 (could not export to Zotero).
Scoping search in library catalog (Primo), Title (results could not be exported to Zotero)
Search strategy: FacDev terms
Filters:
⁃ Years: 2013-2021
⁃ Peer-reviewed journals
⁃ Articles
⁃ Conference proceedings
⁃ Reports
⁃ English
⁃ Added SU: Education & educational research; Education; Higher education; Faculty development; Professional development
Total results: 4448 | Relevant results: 2541

Full Title search in Primo, using all search terms above and other key terms from the RQ1 scoping searches; added "learning outcomes assessment" and "outcomes assessment" to the learning improvement search string
Search strategy: (FacDev terms) AND (Outcomes terms) AND (HE terms)
Total results: 0

Repeated Title search for "faculty development" AND "improved student learning outcomes"; added "learning outcomes assessment" and "outcomes assessment" to the learning improvement search string
Search strategy: (FacDev terms) AND (Outcomes terms)
Filter:
⁃ date range
Total results: 1 | Relevant results: 1

Repeated Title search for "faculty development" AND "higher education"
Search strategy: (FacDev terms) AND (HE terms)
Filters:
⁃ Years: 2013-2021
⁃ Peer-reviewed journals
⁃ Articles
⁃ Conference proceedings
⁃ Reports
⁃ English
⁃ Higher Education (= 273 results)
⁃ Faculty Development
⁃ Professional Development
⁃ Education & Educational Research
⁃ Academic development
⁃ Educational development
Total results: 3832 | Relevant results: 190

Scoping search in library catalog (Primo), Subject
Search strategy: FacDev terms
Filters:
⁃ Years: 2013-2021
⁃ Peer-reviewed journals
⁃ Articles
⁃ Conference proceedings
⁃ Reports
⁃ English
⁃ "higher education"; "colleges & universities"; universities; "education & educational research"; "faculty development"; "professional development"; "college faculty"
Total results: 13,219 | Relevant results: 268
Full Subject search in Primo, using all search terms above and other key terms from the RQ1 scoping searches; added "learning outcomes assessment" and "outcomes assessment" to the learning improvement search string
Search strategy: (FacDev terms) AND (Outcomes terms) AND (HE terms)
Filters:
⁃ dates
⁃ peer-reviewed journals
Total results: 67 | Relevant results: 10

Repeated Subject search for "faculty development" AND "improved student learning outcomes"
Search strategy: (FacDev terms) AND (Outcomes terms)
Filters:
⁃ dates
⁃ peer-reviewed journals
Total results: 159 | Relevant results: 36

Repeated Subject search for "faculty development" AND "higher education"
Search strategy: (FacDev terms) AND (HE terms)
Filters:
⁃ date; peer-reviewed journals; articles; conference proceedings; reports
⁃ "Colleges & Universities"; "Higher Education"; "Universities"; "College Faculty"; "College Students"; Faculty Development; Professional Development; Communities of Practice; Education & Educational Research; Teaching Methods; Teaching; Learning; Teacher Attitudes; Educational Change; Pedagogy
Total results: 14,614 | Relevant results: 172

Earlier searches (Nov. - Jan.) yielded 20 previously identified articles. Because the scoping searches in Primo yielded large numbers of results in some cases, not all citations were exported to Zotero. Of the sources that were added to Zotero, merging duplicates left 719 total search results from the Primo scoping searches.
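The duplicate merging mentioned above can be pictured with a minimal sketch. This assumes hypothetical citation records with "doi" and "title" fields and only approximates, rather than reproduces, Zotero's duplicate detection.

```python
import re

def citation_key(record: dict) -> str:
    """Prefer the DOI as an identifier; fall back to a normalized title."""
    doi = (record.get("doi") or "").lower().strip()
    if doi:
        return doi
    title = (record.get("title") or "").lower()
    return re.sub(r"[^a-z0-9]+", " ", title).strip()

def merge_duplicates(records: list[dict]) -> list[dict]:
    """Keep the first occurrence of each distinct work."""
    seen: dict[str, dict] = {}
    for r in records:
        seen.setdefault(citation_key(r), r)
    return list(seen.values())

# Example: two exports of the same article collapse to one record.
combined = [
    {"doi": "10.1234/example", "title": "A Study"},
    {"doi": "10.1234/example", "title": "A study."},
]
assert len(merge_duplicates(combined)) == 1
```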
Scoping search in ERIC (Abstracts); all ERIC searches were limited by date (2013-2022*), English, and peer-reviewed status
Search strategy: FacDev terms
Filters:
⁃ source type (academic journals, reports, ERIC documents)
⁃ institution type (higher education, universities, postsecondary education, two year colleges, community colleges)
⁃ SU NOT "foreign countries"
Total results: 1795 | Relevant results: 407

Search strategy: FacDev AND Outcomes terms
Total results: 7 | Relevant results: 7

Search strategy: FacDev AND HE terms
Filters:
⁃ source type (academic journals, ERIC documents, reports)
⁃ postsecondary education; college faculty; faculty development; educational development; educational practices; universities; two year colleges; professional development; community colleges; skill development; staff development
⁃ NOT "foreign countries"
Total results: 945 | Relevant results: 113

Scoping search in ERIC (Title)
Search strategy: FacDev terms
Total results: 597 | Relevant results: 0

Search strategy: FacDev AND Outcomes terms
Total results: 0

Search strategy: FacDev AND HE terms
Total results: 94 | Relevant results: 94

Full search in title (FacDev+outcomes+HE) = 0 results; removing all "learning outcomes" keywords and using only the word "outcomes" = 3 results

Scoping search in ERIC (Text)
Search strategy: FacDev terms
Total results: 15,279

Search strategy: FacDev AND Outcomes terms
Total results: 95 | Relevant results: 95

Search strategy: FacDev AND HE terms
Filters:
⁃ NOT SU "foreign countries"
⁃ higher education; faculty development; postsecondary education; college faculty; two year colleges; community colleges; outcomes of education
Total results: 3848 | Relevant results: 175

Full search in text (FacDev+outcomes+HE) = 58 results

Scoping search in ERIC (Subject)
Search strategy: FacDev terms
Total results: 13,569 | Relevant results: n/a
Search strategy: FacDev AND Outcomes terms
Total results: 59

Search strategy: FacDev AND HE terms
Filters:
⁃ NOT SU "foreign countries"
⁃ faculty development; postsecondary education; college faculty; two year colleges; community colleges; qualitative research; educational research; outcomes of education
Total results: 4672 | Relevant results: 149

Full subject search (FacDev+outcomes+HE) = 21 results

Scoping search in PsycINFO (Abstracts)
Search strategy: FacDev terms
Filters:
⁃ English
⁃ Peer-reviewed
⁃ Source type: conference proceedings, journal article, peer-reviewed journal, peer-reviewed status unknown
⁃ Date: 2013-2022
⁃ professional development OR teaching OR curriculum OR educational personnel OR learning OR college teachers OR higher education OR staff development OR teaching methods OR educational programs OR program development OR education OR qualitative research OR academic achievement OR college students OR educational measurement OR program evaluation OR curriculum development OR training OR colleges OR student attitudes OR faculty OR teacher attitudes OR educational program evaluation OR learning strategies OR learning environment OR collaboration OR undergraduate education OR evidence based practice
Total results: 636 | Relevant results: 493

Search strategy: FacDev AND Outcomes terms
Total results: 2 | Relevant results: 2

Search strategy: FacDev AND HE terms
Filters:
⁃ English
⁃ Peer-reviewed
⁃ Source type: peer-reviewed journal, journal article
⁃ Date: 2013-2022
⁃ higher education; professional development; college teachers; colleges; undergraduate education
Total results: 194 | Relevant results: 100

Full abstract search (FacDev+outcomes+HE): Total results: 1 | Relevant results: 1

Scoping search in PsycINFO (Title)
Search strategy: FacDev terms
Filters:
⁃ SU: professional development OR college teachers OR staff development OR higher education OR colleges OR continuing education
Total results: 135 | Relevant results: 86

Search strategy: FacDev AND Outcomes terms
Total results: 0
Search strategy: FacDev AND HE terms
Total results: 9 | Relevant results: 9

Full title search (FacDev+outcomes+HE) = 0 results

Scoping search in PsycINFO (Anywhere; text search not available, added to Zotero with tag/collection "text")
Search strategy: FacDev terms
Total results: 2036 | Relevant results: n/a

Search strategy: FacDev AND Outcomes terms
Total results: 4 | Relevant results: 4

Search strategy: FacDev AND HE terms
Filters:
⁃ SU higher education OR college teachers OR undergraduate education OR colleges
Total results: 1931 | Relevant results: 287

Full "text" search (FacDev+outcomes+HE): Total results: 4 | Relevant results: 4

Scoping search in PsycINFO (All Subjects & Indexing)
Search strategy: FacDev terms
Filters:
⁃ SU "higher education" OR "colleges"
Total results: 568 | Relevant results: 54

Search strategy: FacDev AND Outcomes terms = 0 results. Modifying the outcomes search string to only "improve*" returned 10 results, all specific to healthcare. Modifying it to "learning" returned 152 results; adding SU "higher education" OR "colleges" narrowed these to 25.
Total results: 152 | Relevant results: 25

Search strategy: FacDev AND HE terms
Filters:
⁃ SU higher education OR colleges OR universities OR undergraduate education
Total results: 132 | Relevant results: 63

Full subject search (FacDev+outcomes+HE) = 0 results. Tried the PsycINFO-recommended alternative search SU "professional development" AND SU "college teachers" = 493 results.
Filters:
⁃ dates
⁃ peer-reviewed
⁃ SU college teachers OR professional development OR higher education OR teaching OR learning OR staff development OR teacher attitudes OR colleges OR teaching methods OR college students OR curriculum OR faculty OR program development OR teachers OR classrooms OR collaboration OR collaborative learning OR professional competence OR program evaluation OR academic environment OR educational measurement OR educational quality OR innovation OR mentors OR training OR academic settings OR college academic achievement OR continuing education OR educational programs
Total results: 493 | Relevant results: 105
Scoping search in Web of Science (Abstracts)
Search strategy: FacDev terms
Filters:
⁃ FacDev terms (Abstract)
⁃ Document types: Articles OR Proceedings Papers OR Early Access
⁃ English (Languages)
⁃ Education Educational Research (Research Areas and Web of Science Categories)
⁃ Excluded document types: Book Chapters, Data Papers, Review Articles
Total results: 1199

Search strategy: FacDev AND Outcomes terms
Total results: 7 | Relevant results: 7

Search strategy: FacDev AND HE terms
Filters:
⁃ FacDev terms (Abstract)
⁃ HE terms (Abstract)
⁃ Excluded document types: Retracted Publications, Letters, Data Papers, Editorial Materials, Review Articles, Book Chapters
⁃ Education Educational Research (Web of Science Categories and Research Areas)
⁃ English (Languages)
Total results: 650 | Relevant results: 650

Full abstract search (FacDev+outcomes+HE): Total results: 8 | Relevant results: 8

Scoping search in Web of Science (Title)
Search strategy: FacDev terms
Filters:
⁃ FacDev terms
⁃ HE terms (Abstract)
⁃ Excluded document types: Retracted Publications, Letters, Data Papers, Editorial Materials, Review Articles, Book Chapters
⁃ Education Educational Research (Web of Science Categories and Research Areas)
⁃ English (Languages)
Total results: 362 | Relevant results: 362

Search strategy: FacDev AND Outcomes terms
Total results: 0 | Relevant results: 0
Search strategy: FacDev AND HE terms
Filters:
⁃ FacDev terms (Title)
⁃ HE terms (Title)
⁃ Excluded document types: Book Chapters, Review Articles, Meeting Abstracts, Book Reviews, Books, Editorial Materials
Total results: 84 | Relevant results: 84

Full title search (FacDev+outcomes+HE): Total results: 0 | Relevant results: 0

Scoping search in Web of Science (Topic; saved as Subject in Zotero)
Search strategy: FacDev terms
Total results: 1215 | Relevant results: n/a

Search strategy: FacDev AND Outcomes terms
Total results: 20 | Relevant results: 20

Search strategy: FacDev AND HE terms
Total results: 743 | Relevant results: 743

Full text search (FacDev+outcomes+HE): Total results: 9 | Relevant results: 9

Modified scoping search in Educator's Reference Complete (Keywords); this database does not easily bulk-export citations, so keeping searches small was necessary
Search strategy: Keyword: FacDev terms AND Keyword: Outcomes terms AND Keyword: HE terms
Filters (Revise Search):
⁃ Full text
⁃ Peer-reviewed
⁃ Document type: "Abstract" OR "Article" OR "Case study" OR "Report"
⁃ Date: 2013-2022
Total results: 363

Modified scoping search in Educator's Reference Complete (Title)
Search strategy: Document Title: FacDev terms AND Document Title: Outcomes terms AND Document Title: HE terms
Filters:
⁃ Full text
⁃ Peer-reviewed
⁃ Document type: "Abstract" OR "Article" OR "Case study" OR "Report"
⁃ Date: 2013-2022
Total results: 6 | Relevant results: 0
year education" OR university Filters: ⁃ Full Text ⁃ Peer-Reviewed ⁃ Document Type : "Abstract" Or "Article" Or "Case study" Or "Report" ⁃ Date: 2013 - 2022 Modified Entire Document: “Faculty development” OR “educator 5837 164 scoping search professional development” OR “College teachers -- In- in Educator’s service training” OR “College teachers -- Training of” Reference OR “college staff development” OR “college faculty Complete- Text professional development” OR “educational development” OR “Scholarship of teaching and learning” OR “SoTL” AND Entire Document: “learning outcomes assessment” OR “outcomes assessment” OR “improved student learning outcomes” OR "learning improve*" OR "student improve*”AND Entire Document: “higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university Filters: ⁃ Full Text ⁃ Peer-Reviewed ⁃ Document Type : "Abstract" Or "Article" Or "Case study" Or "Report" ⁃ Date : 2013 - 2022 Scoping search “Faculty development” OR “educator professional 2165 574 in ProQuest development” OR “College teachers -- In-service Central- training” OR “College teachers -- Training of” OR Abstracts “college staff development” OR “college faculty 17 6 Total Total Database Search Strategy Relevant Results Results professional development” OR “educational development” OR “Scholarship of teaching and learning” OR “SoTL” OR “professional continuing education” OR “staff development” = 2165 results Filters ⁃ SU higher education OR colleges & universities OR community colleges FacDev AND “learning outcomes assessment” OR 5 5 “outcomes assessment” OR “improved student learning outcomes” OR "learning improve*" OR "student improve*” FacDev AND “higher education" OR college OR 915 158 undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university Filters ⁃ SU (higher education OR colleges & universities OR community colleges) AND (teachers OR college faculty OR college students OR college campuses OR university faculty OR college professors OR academic staff OR staff development) NOT corporate culture ⁃ SU AND (learning OR teaching OR students OR professional development OR pedagogy OR curricula OR collaboration OR education OR research OR teaching methods OR innovations OR success OR classrooms OR studies OR workshops OR case studies OR knowledge OR skills OR qualitative research OR design OR instructional design OR student participation OR communities of practice OR quality of education OR literature reviews OR participation OR culture OR curriculum development OR quality assurance OR active learning Full search in abstracts = 2 results (exported to Zotero) 172 Tried database recommended search ab(SU.exact("PROFESSIONAL DEVELOPMENT") AND SU.exact("COLLEGE TEACHERS" OR "COLLEGE FACULTY")) Added SU higher education OR learning OR teaching OR community colleges OR colleges & universities OR college students OR students OR pedagogy OR teaching methods OR studies OR qualitative research OR curricula OR perceptions OR success OR skills OR college campuses OR curriculum development OR case 17 7 Total Total Database Search Strategy Relevant Results Results studies OR innovations OR interviews OR literature reviews OR academic achievement OR organizational change OR questionnaires OR reflective teaching OR teacher attitudes Scoping search “Faculty development” OR “educator professional 644 199 in ProQuest 
Scoping search in ProQuest Central (Title)
Search strategy: FacDev terms
Filters:
⁃ SU higher education OR colleges & universities OR college campuses OR community colleges
Total results: 644 | Relevant results: 199

Search strategy: FacDev AND Outcomes terms = 0 results; modified the outcomes search string to "learning outcomes" and then to "assessment"
Total results: 20

Search strategy: FacDev AND HE terms
Total results: 51 | Relevant results: 51

Full search in title, with the outcomes search string modified to only "learning outcomes" OR "assessment": Total results: 1 | Relevant results: 1

Scoping search in ProQuest Central (Text)
Search strategy: FacDev terms
Filters:
⁃ SU (higher education OR colleges & universities OR college campuses OR college students) NOT corporate culture
Total results: 732 | Relevant results: 164

Search strategy: FacDev AND Outcomes terms
Filters:
⁃ SU higher education OR colleges & universities OR college campuses OR community colleges
Total results: 610 | Relevant results: 202

Search strategy: FacDev AND HE terms
Filters:
⁃ SU higher education (= 2902 results)
⁃ SU colleges & universities OR college campuses OR community colleges
⁃ SU (learning OR success OR innovations OR academic achievement OR quality OR quality of education OR accountability)
⁃ SU AND (college students OR professional development OR college faculty OR university students OR higher education institutions)
Modified search: SU higher education AND (colleges & universities OR college campuses OR community colleges) AND (learning OR academic achievement OR quality of education OR accountability)
Filters:
⁃ SU AND (qualitative research OR questionnaires OR perceptions OR literature reviews OR student attitudes OR research methodology OR case studies OR research & development--r&d OR attitudes)
Total results: 25,395 | Relevant results: 304

Full text search (FacDev+outcomes+HE)
Filters:
⁃ SU higher education OR colleges & universities OR college campuses OR community colleges (= 203 results)
⁃ SU AND (learning OR teaching OR pedagogy OR teaching methods OR college students OR professional development OR success OR accountability OR accreditation OR educational evaluation OR knowledge OR academic achievement OR innovations OR quality of education OR student attitudes OR student participation OR college faculty OR curriculum development OR learning outcomes OR quality OR university students OR educational objectives OR instructional design)
Total results: 602 | Relevant results: 177
Scoping search in ProQuest Central (Subject)
Search strategy: FacDev terms
Total results: 15 | Relevant results: 15

Search strategy: FacDev AND Outcomes terms (modified to "learning outcomes" OR "assessment")
Total results: 2 | Relevant results: 2

Search strategy: FacDev AND HE terms
Total results: 6 | Relevant results: 6

Full search in subject (modified outcomes search string): su("Faculty development" OR "educator professional development" OR "College teachers In-service training" OR "College teachers Training of" OR "college staff development" OR "college faculty professional development" OR "educational development" OR "Scholarship of teaching and learning" OR "SoTL" OR "professional continuing education" OR "staff development") AND su("higher education" OR college OR undergraduate OR "community college" OR "two-year college" OR "two-year education" OR "four-year college" OR "four-year education" OR university) AND su("outcomes of learning" OR "assessment outcomes" OR "outcomes" OR "assessment" OR "assessment of learning outcomes")
Total results: 1 | Relevant results: 1

Scoping search in Sociological Abstracts (Abstracts)
Search strategy: FacDev terms
Total results: 0

Search strategy: FacDev AND Outcomes terms
Total results: 0

Search strategy: FacDev AND HE terms = 0 results; modified to "higher education" = 2 results, none relevant. Tried the database-recommended search ((SU.exact("HIGHER EDUCATION") OR SU.exact("HIGHER EDUCATION 04564")) AND SU.exact("PROFESSIONAL DEVELOPMENT"))
Total results: 37 | Relevant results: 0

Scoping search in Sociological Abstracts (Title)
Tried multiple combinations of search terms, including database-recommended terms, and found no results.

Scoping search in Sociological Abstracts (Subject)
Search strategy: FacDev terms
Total results: 15 | Relevant results: 0

Search strategy: FacDev AND Outcomes terms
Total results: 1 | Relevant results: 0

Search strategy: FacDev AND HE terms
Total results: 2 | Relevant results: 0

Full search in subject = 0 results; no database-recommended search was offered.
Google Scholar scoping search (anywhere)
Search strategy: All words: improvement learning qualitative "faculty development"; Exact phrase: united states; At least one of the words: college undergraduate "community college" "two year college" university; Without the words: elementary secondary quantitative doctoral healthcare nursing medical dissertation European
Total results: 256 (most were duplicates of results from previous searches)

RQ2 all searches = 3677 total results

RQ2 Search Results by Tag
FacDev-text, all databases: 157
FacDev-subject, all databases: 324
FacDev+HE-abstract, all databases: 856
FacDev+HE-title, all databases: 339
FacDev+HE-text, all databases: 473
FacDev+HE-subject, all databases: 1158
FacDev+outcomes-abstract, all databases: 12
FacDev+outcomes-title, all databases: 21
FacDev+outcomes-text, all databases: 295
FacDev+outcomes-subject, all databases: 128
FacDev+outcomes+HE-abstract, all databases: 180
FacDev+outcomes+HE-title, all databases: 3
FacDev+outcomes+HE-text, all databases: 371
FacDev+outcomes+HE-subject, all databases: 134
Outcomes+HE-title, all databases: 12

APPENDIX C

FORMAL LITERATURE SELECTION

Each entry below gives the study's purpose, whether its findings connect to student learning, and, for included studies, whether they contributed first- or second-order constructs. An asterisk (*) indicates inclusion in the synthesis.

*Allen et al. (2019)
Purpose: Describe a faculty development initiative's impact on student learning.
Connects to student learning? Yes
Constructs: Second

*Baas et al. (2016)
Purpose: Explore the dual narrative of assessment in higher education.
Connects to student learning? Indirectly; faculty perceptions about a culture of assessment's impact on learning, but no specific examples; excluded from key themes.
Constructs: Second

Barrette & Paesani (2018)
Purpose: Develop and apply a model of cultural literacy to assessment documents to create a model of discipline-specific assessment.
Connects to student learning? No; mentioned in implications for future research.

*Bickerstaff et al. (2021)
Purpose: Findings from a lesson study faculty development initiative and its impact on student learning.
Connects to student learning? Yes
Constructs: Second

*Carter (2013)
Purpose: Describe an approach to authentic assessment of information literacy and findings related to student learning.
Connects to student learning? Yes
Constructs: Second

Caudle & Hammons (2018)
Purpose: Explore the experiences of community college faculty with outcomes assessment. Focused on faculty experiences and perceptions of assessment and how to increase and encourage faculty participation in assessment activities, but did not examine impacts on student learning.
Connects to student learning? No; mentioned in implications for future research.

Colina (2021)
Purpose: Explore implementation of assessment at three universities. Concluded that a compliance mentality and accreditation made assessment meaningless and superficial.
Connects to student learning? No; focus is on why faculty participate and on impacts of assessment on institutions.

*Culver & Phipps (2018)
Purpose: Explore faculty participation in and perceptions of assessment at an HBCU. Findings indicate a learning orientation in general but ambivalence toward whether assessment is meaningful.
Connects to student learning? Indirectly; faculty perceptions about a culture of assessment's impact on learning, but no specific examples; excluded from key themes.
Constructs: Second

*Cydis et al. (2015)
Purpose: Propose a model for implementing an essential learning outcomes (ELO) initiative. Pre- and post-surveys indicated student self-perception of competency.
Connects to student learning? Yes
Constructs: Both

*Demeter et al. (2019)
Purpose: Described use of direct and indirect measures assessing critical thinking and written communication outcomes and students' standardized and self-reported learning results.
Connects to student learning? Yes
Constructs: Second
Elliott (2014)
Purpose: Literature review of best practices and resources for faculty development.
Connects to student learning? No; student learning as a product of or purpose for faculty development is not discussed.

Elliott & Oliver (2016)
Purpose: Explore the relationship between faculty development and student learning at a community college.
Connects to student learning? No; no direct correlations to student learning.

Emil & Cress (2014)
Purpose: Explore factors that impact faculty participation in institutional assessment efforts. Recommendations to create a culture of assessment.
Connects to student learning? No

Fuller et al. (2016)
Purpose: Development of an instrument to empirically confirm how cultures of assessment are created.
Connects to student learning? No

Garfolo & L'Huillier (2015)
Purpose: Model for conducting student learning assessment and reporting findings for accreditation purposes.
Connects to student learning? No

Goss (2022)
Purpose: Literature review of assessment related to academic libraries. Applies literature on higher education assessment specifically to the library context.
Connects to student learning? No

Guetterman & Mitchell (2016)
Purpose: Faculty inquiry into how organizational context relates to faculty engagement in assessment.
Connects to student learning? No; efficacy of faculty development is mentioned but not tied to learning.

Hart & Robinson (2019)
Purpose: Discussed a faculty development opportunity (assignment charrette session) with the intention of improvement, but the impact of the experience was not discussed or examined. Students presented their learning, but the outcomes discussed in the findings were those of the charrette session, not student learning.
Connects to student learning? No; an interesting study on program assessment, but not tied back to impact on learning.

Holzweiss et al. (2016)
Purpose: Content analysis of responses to the Administrators' Survey of Assessment Culture. Provided an important viewpoint about how administrators tended to view assessment as primarily for compliance with accreditation.
Connects to student learning? No; learning as a result of a culture of assessment is not discussed.

Hufford (2016)
Purpose: Analysis of course syllabi to identify gaps that academic libraries can fill to support learning and assessment.
Connects to student learning? No; the data collected do not measure impact on learning; identified for future research.

*Jankowski (2020)
Purpose: Report of NILOA's survey of assessment-related changes during COVID-19.
Connects to student learning? Directly and indirectly
Constructs: Both

Jonson et al. (2014)
Purpose: Interrogates conceptions of "use" of assessment data. Proposes a model emphasizing assessment as inquiry and expanding "use" across multiple dimensions.
Connects to student learning? No; no direct ties to student learning, though perceptions of learning are referenced multiple times.

Jonson et al. (2017)
Purpose: Analysis of connections between faculty assessment knowledge and beliefs and use of assessment data.
Connects to student learning? No; discusses the need for support and faculty development, but not impact on student learning.

Joyner (2016)
Purpose: Outlines a curriculum mapping project conducted by faculty. Focus is on mapping and on teaching faculty to develop outcomes; did not present changes to curricula or instruction.
Connects to student learning? No; did not discuss how changes impacted student learning.

*Karabulut-Ilgu et al. (2021)
Purpose: Examined results of an active learning faculty development program. Engineering faculty learned active learning strategies via campus workshops and implemented them; faculty believed learning outcomes were improved.
Connects to student learning? Yes
Constructs: Second

Katsampoxaki-Hodgetts (2022)
Purpose: Describes a faculty development workshop on syllabus redesign. Discussion of faculty perceptions and satisfaction regarding the workshop content.
Connects to student learning? No; international; no impact on or discussion of student learning.

*Marrujo-Duck (2017)
Purpose: Explores beliefs of community college faculty regarding the effectiveness of assessment. Discusses faculty perceptions about teaching improvement as a result of assessment.
Connects to student learning? Yes
Constructs: Both
McMillan et al. (2020)
Purpose: Discusses interventions used in a nursing program to improve student learning.
Connects to student learning? No; outcome data were used to improve teaching (described), but improvement was not tied back to student learning other than being "assumed."

Meyer (2014)
Purpose: Thematic analysis of strategies used by faculty to increase student engagement and success in online classes.
Connects to student learning? No; does not address a culture of assessment or faculty development; reports improvement in learning in online classes, but not tied to specific initiatives or supports.

Morse
Purpose: Focuses on why assessment happens and on its roadblocks.
Connects to student learning? No; no discussion of learning.

Ndoye
Purpose: Focuses on what effective outcomes assessment systems do and what makes them effective. Instructional improvement was one finding, but learning was only mentioned generally.
Connects to student learning? No; no specific tie to learning.

*Pelletreau
Purpose: A faculty learning community focused on active learning pedagogies with multi-institutional participants; examines impact on student learning improvement.
Connects to student learning? Yes
Constructs: Second

Pham
Purpose: Described and evaluated a quality process for institutional effectiveness.
Connects to student learning? No; the assessment quality data give a specific improvement percentage (p. 128), but the study itself does not tie to student learning.

Schoepp
Purpose: Focuses on faculty perceptions of outcomes assessment.
Connects to student learning? No; international, not tied to learning improvement.

Stevenson
Purpose: Climate survey with a focus on improving perceptions of outcomes assessment.
Connects to student learning? No; not related to learning improvement.

Stitt-Bergh
Purpose: Summarizes the results of three studies as an "improvement story" highlighting assessment for learning improvement.
Connects to student learning? No; not a study; conceptual/theoretical.

Stitt-Bergh (2019)
Purpose: Opinion piece about the importance and value of assessment.
Connects to student learning? No; not a study.

*Tinnell
Purpose: Examines the impact of a faculty learning community focused on incorporating active learning pedagogies in engineering education.
Connects to student learning? Yes
Constructs: Both

Townsend
Purpose: Description of using a charrette model for assignment redesign and assessment of student learning.
Connects to student learning? No; faculty development is not tied back to learning, and results of student learning are not presented.

*Wheat
Purpose: Examines the impact of an active learning environment and faculty development program on student learning.
Connects to student learning? Yes
Constructs: Second

*Willett
Purpose: Describes a project to analyze the impact of faculty development on writing across the curriculum.
Connects to student learning? Yes
Constructs: Second
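Read as a decision rule, the screening recorded in this table amounts to a simple filter: a study is carried into the synthesis (starred above) when its findings connect to student learning at least indirectly. The sketch below expresses that rule; the field names and example records are illustrative and are not code used in the study.

```python
from dataclasses import dataclass

@dataclass
class Study:
    citation: str
    connects_to_learning: str  # e.g. "yes", "indirectly", "directly and indirectly", "no; ..."
    constructs: str = ""       # "first", "second", or "both" for included studies

def include_in_synthesis(study: Study) -> bool:
    """A study is carried forward unless its learning connection is a plain no."""
    return not study.connects_to_learning.lower().startswith("no")

# Illustrative records drawn from the table above.
candidates = [
    Study("Allen et al. (2019)", "yes", "second"),
    Study("Baas et al. (2016)", "indirectly", "second"),
    Study("Barrette & Paesani (2018)", "no; mentioned in implications for future research"),
]
print([s.citation for s in candidates if include_in_synthesis(s)])
```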