The chapter contributors challenge faculty developers to move toward evidence-based practice in development interventions. They also discuss related needs: a theoretical foundation for faculty development activities, organizational cultures of learning, and methodologies for assessing the learning outcomes of faculty who participate in faculty development.
In this review of 138 studies on the impact of educational development practices, we discuss the idea of impact and summarize previous studies of this topic. We then present an overview of findings on impact from current studies of typical educational development activities: workshops, formal courses, communities of practice, consultation, mentoring, and awards and grants programs. We conclude that although the studies vary in quality, the sheer volume of results offers guidance for development practice.
Although most educational development professionals recognize the importance of monitoring their programme's impact, systematic evaluation is not common and often relies on indirect measures such as extent of participation and satisfaction. This paper discusses approaches to programme impact evaluation in terms of six possible points of focus: (1) participants' perceptions/satisfaction; (2) participants' beliefs about teaching and learning; (3) participants' teaching performance; (4) students' perceptions of staff's teaching performance; (5) students' learning; and (6) effects on the culture of the institution. Whatever focus is selected, it is important to address the following questions: (1) What is the intended impact? (2) Why evaluate? (3) When to evaluate? (4) Who evaluates? (5) How to evaluate? (6) Is the actual impact the same as the intended impact, and is the actual impact desirable? (7) Who should receive the results of the evaluation? (8) What will happen as a consequence? Based on these six foci and eight questions, a 6 × 8 matrix is proposed to guide the evaluation of educational development initiatives. It is argued that the approach to impact evaluation needs to be aligned with the focus of the desired change as well as the intervention strategies used to bring about such change.
Assessment is a cyclical process within which educators construct outcomes, implement programs, assess constructs such as learning, evaluate results, and use those results to craft stronger programs and services. Within educational and faculty development, assessment measures program impact on faculty, students, and/or institutional culture. Assessment activities also support the scholarly dissemination of evidence-based and high-impact practices. Unfortunately, many centers for teaching and learning struggle to implement assessment practices that go beyond satisfaction-based program evaluations. This struggle can be attributed to multiple factors, including weak assessment infrastructure, shortsighted assessment goals, ill-conceived frameworks, and limited resources to do more than collect satisfaction data. We present an assessment framework that addresses these limitations by organizing center assessment efforts around faculty learning outcomes (FLOs). This framework focuses on the impact center programming has on faculty while also providing feedback on program quality and important information about institutional culture. More importantly, the multi-tiered FLO framework allows centers to systematically collect multiple data sources for each FLO. Comprehensive analysis of these FLO-driven data sources gives centers a new and more robust tool for understanding center effectiveness and demonstrating the value of faculty development in higher education to diverse stakeholders.
This paper presents a program evaluation model, along with field-testing results, developed in response to the need for a model that can support systematic evaluation of centers for teaching and learning (CTLs). The model builds upon the author's previous studies investigating the evaluation practices and struggles experienced at 53 CTLs. Findings from these studies attribute evaluation struggles to contextual issues involving evaluation capacity, ill-structured curricula, and ill-conceived evaluation frameworks. This field-tested Four-Phase Program Evaluation Model addresses these issues by approaching evaluation comprehensively, encompassing an evaluation capacity analysis, curricular conceptualization, evaluation planning, and plan implementation.
Rutgers Office of Teaching Evaluation and Assessment Research
Lindsay Wheeler
This document provides a detailed description of best practices in program assessment. While tailored to departmental program assessment, the practices translate to educational development and are helpful for those wanting to learn more about program assessment.
Faculty development emerged as a phenomenon in the early 1970s, yet until recently there was little interest in evaluating its effectiveness.
Lindsay Wheeler
This brief blog post outlines the difference between assessment and evaluation and describes one approach to evaluating program outcomes.
Faculty developers across the nation are working to develop methods for evaluating their services. In 2010, the 35th annual Professional and Organizational Development (POD) Network conference identified assessing the impact of faculty development as a key priority. This growing demand spawned my interest in conducting a 2007 statewide and a 2010 nationwide investigation of faculty development evaluation practices in the U.S. This article describes how to develop a customized evaluation plan grounded in your program's structure, purpose, and desired results, drawing on contemporary practices discovered through this research.
Understanding the Impact of Educational Development Interventions
International Journal for Academic Development
Lindsay Wheeler
This award-winning article describes a robust and extensive assessment of a suite of center programs. It received the International Journal for Academic Development's 2021 Article of the Year award; below, the judges describe how the study contributes to educational development research and assessment practices.
This study explored three US educational development (ED) programs: a weeklong course design institute, a new faculty learning community (NFLC), and a STEM learning community (STEM-LC). We compared observed instruction and student achievement for 239 STEM undergraduate courses taught by instructors who had or had not engaged in ED. Courses taught by NFLC and STEM-LC instructors had significantly more learning-focused syllabi and active learning than courses taught by non-engaged instructors, controlling for class size and type. We conclude that instructors need support in implementing active learning to ensure all students benefit. Additional research is needed to explore the relationship between ED and active learning.
JUDGES' CITATION
This paper bravely ventures into the difficult territory of seeking quantifiable data on academic development interventions—in other words, the kinds of studies that many university leaders demand to see in order to accept that developers' work is genuinely valuable. The paper has a great deal to offer academic development practice and offers a way forward for practitioners to develop rigorous evaluations of the initiatives we champion, which can be used to justify those initiatives and garner institutional support—especially important given the performance regimes that are increasingly prevalent across the world. Using a range of statistical measures, the authors provide insights that will prove helpful to the academic development community, both in terms of findings and methods, and present a robust, comprehensive, and convincing study of the effects of academic development interventions on instructor practices and student learning. It is particularly pleasing to read that the work we as academic developers do can have positive outcomes for Underrepresented Minority students. It's been hard to measure this, and this article shows how we can; we should all be doing more of this kind of research. The paper offers developers a template for how future studies might be conducted and highlights some of the inherent difficulties involved in quantifying academic development interventions.