Implementation Research: The Study of Processes and Strategies

Introduction

Proctor et al. define implementation research as "the study of processes and strategies that integrate evidence-based effective treatment into routine use." Because implementation scholars are concerned with complexity and with the processes by which outcomes are reached, policy implementation scholarship is most often qualitative, and many implementation scholars utilize case study methodology. However, qualitative studies of implementation are not the predominant source from which policymakers receive data on evidence-based programming. Instead, it is quantitative program and policy evaluations that actively inform policymakers. While complexities are embraced in qualitative implementation research, such scholarship has limited influence in a policymaking culture that values simplicity, which leads to an overreliance on data generated by quantitative scholarship. Though empirical studies can provide data on the effect and impact of reforms, they are limited by their choice of variables, may suffer from bias in their application of statistical methods, and often do not include concrete analysis of the implementation processes by which outcomes are reached. Relying solely on quantitative methods can therefore further contribute to reductionism in policy design.

Researchers in education need not wholly shift their methodological approach; there is merit in both quantitative and qualitative data collection. There is, however, a need for scholars to apply more dynamic lenses to their studies. The future of policy design and implementation would benefit from statistical studies that incorporate analyses of implementation processes within their evaluations of programs, while qualitative implementation studies should develop tools and testable conceptual frameworks and test those theories and frameworks in real-world settings.

Background

Case studies of implementation across policy contexts have led to knowledge regarding the effective processes by which implementation should be undertaken and how change can be sustained. Case studies in education policy have examined how the practices, behaviors, and culture and climate of schools impacted the way that federal reforms such as No Child Left Behind (NCLB), curricular reforms such as Reading First, and comprehensive school reforms such as Success for All were enacted. Qualitative implementation analyses, like those explored in Part A, have promise because they offer policymakers "a theory of social action grounded on the experiences…of those likely to be affected by a policy decision or thought to be a part of the problem." The in-depth analysis provided by case studies also offers quantitative researchers a specific set of testable constructs to incorporate into statistical research. However, the preponderance of variables and the inconsistent language and definitions of terms are not conducive to the current policy design paradigm, which is outcomes-focused and "evidence-based."

Currently, the succinct manner in which quantitative researchers are able to communicate outcomes via their research findings contributes to their widespread use by policymakers eager to employ "evidence-based" practices and programs within their policy designs. Overreliance on research methods that communicate outcomes, without also including the processes by which those outcomes are reached, poses significant challenges to education stakeholders as they attempt to replicate the successes of "evidence-based" programs.

Implementation processes, policy environment, and organizational context are complex and interrelated factors that impact the results of policy innovations. In 1986, Malcolm Goggin asserted, "implementation behaviors are shaped by the decision making environment, the type of policy at stake, and the characteristics of both the implementing organizations and the people who manage the program" (p. 330). Excluding any of these factors provides reformers with an incomplete understanding of what it takes to make and sustain change. Without holistic knowledge regarding interventions and implementation processes, policymakers divert educational funds and resources toward innovations that may prove unsuccessful in practice.

Case Studies: Limited?

Implementation studies focus on the uptake of innovations in particular contexts, meaning they often rely on case study methodology. Proctor et al. argue that this is a detriment to the field because it yields "anecdotal evidence" rather than evidence from "highly controlled experiments". Sandfort and Moulton further this claim, stating, "Research designs do little to help figure out which parts of the intervention are causal or what factors drive positive results". In their view, implementation research tends to focus on "what it takes to inspire or motivate actors". In an effort to produce data that can be understood by policymakers and applied in policy design, scholars within implementation studies have called for the development of tools, a shared implementation language, singular frameworks by which theories of implementation can be tested, and the use of more "hypothesis driven statistical approaches". Proponents assert that implementation research that does not seek to develop mechanisms and tools by which effective implementation can be reached will not broaden understanding within the field and, in turn, will not lead to improved policy design or outcomes. Without this methodological and analytical shift, policymakers will be left relying on quantitative measures to design future policies and implementation plans.

Show me the Evidence

Ron Haskins and Greg Margolis contend in "Show Me the Evidence: Obama's Fight for Rigor and Results in Social Policy" that nine out of ten social policies fail. Haskins and Margolis argue that the widespread failure of social policy has given rise to the current evidence-based movement, which evaluates programs for their impact in order to more effectively allocate scarce resources. Under this "evidence-based" regime, particular types of research and data are privileged, and evidence of programmatic success becomes the primary criterion that determines whether programs are replicated or defunded. Proctor et al. argue, however, that the problem is not a lack of evidence-based programs but rather a lack of evidence-based implementation. Sandfort and Moulton claim that research-based interventions can only be beneficial if there are studies that examine implementation processes. Moreover, Blase et al. write that although data is necessary to produce change, "it cannot prompt the adoption of change or create or sustain change in practice in schools or classrooms". In short, statistical analyses are limited by the variables they are able to generate and test. Further, though the empirical findings of studies are often viewed as less susceptible to bias, scholars have elucidated how the analytic assumptions applied to statistical modeling can undermine the validity of findings. Finally, quantitative evaluations of policies that do not include a concrete analysis of implementation processes pose challenges for future attempts at implementation and replication across contexts. Proctor et al. (2009) assert that the culture of an organization "may wield the greatest influence on acceptance of empirically supported treatments and willingness and capacity of provider organizations to implement treatments". Thus, without an understanding of how change processes began and were enacted, the characteristics of an organization, or the human behavior that must be amended in order for change to take hold in a school, the true ability to replicate effectively is limited.

Randomized Controlled Trials

Randomized controlled trials are heralded as a premier means of providing scientific evidence about policies and educational interventions due to their experimental design. Fixsen et al. defined evidence-based as "two or more randomized group designs, professionally done by two or more groups of investigators that examine outcomes of a program". In education, the most statistically rigorous research studies can hope to be featured on the What Works Clearinghouse, an outgrowth of the Institute of Education Sciences. In contentious political climates, many reformers and policymakers are concerned with identifying innovations that work and subsequently taking them to scale. This leads many policymakers to rely on quantitative measures that apply "simplicity and precision" to find answers to complex problems.

The value of quantitative methods, in addition to their ability to provide causal or relational data regarding policy outcomes, is that statistical methods allow researchers to perform subgroup analyses to better understand how targeted populations are impacted by policies. Knowledge of the differential effects of reforms and schooling practices on underserved populations has been a powerful use of statistical analysis. However, it is imperative that experimental studies are applicable to real-world settings.
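
A minimal sketch of such a subgroup analysis, using entirely simulated data and hypothetical variable names rather than results from any cited study, estimates a treatment-by-subgroup interaction to ask whether a reform's effect differs for an underserved group:

```python
# Illustrative sketch: simulated data and hypothetical variable names only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),      # 1 = school received the reform
    "underserved": rng.integers(0, 2, n),  # hypothetical subgroup indicator
})
# Simulated outcome: the reform helps overall and helps the subgroup more.
df["score"] = (
    50
    + 2 * df["treated"]
    + 3 * df["treated"] * df["underserved"]
    + rng.normal(0, 10, n)
)

# The treated:underserved coefficient estimates the differential (subgroup) effect.
model = smf.ols("score ~ treated * underserved", data=df).fit()
print(model.params)
```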

Limitation of Variables

What is considered in policy analysis and evaluative studies is crucial to policy design and implementation. The limitation of research that focuses on outcomes is that, while it may acknowledge the complexities that schools face, the political and local implementation environments and processes in which reforms are enacted are often treated tangentially rather than as central to the analysis. While statistical analyses can provide information regarding the measurable impact that a program has on a particular set of outcome variables, they are bound by the variables included in their scales and survey instruments. Therefore, though quantitative studies may be able to say that a program was (in)effective in impacting a particular measure, they are not necessarily able to say why or how. Though outcome-centered studies include demographic, socio-economic, and other measurable factors, they often miss the interpersonal, school-based contextual factors that directly impact implementation behaviors and processes and the subsequent success of programs. Selection bias inherent in some school reforms poses a further challenge to the ability to replicate programmatic outcomes. In an applied field such as education, our research methods must reflect the complexities of the environments we study. Education is political, politics are racialized, and schools as social institutions are embedded in the racial and political nature of education (Gillborn, 2005). An example of how a lack of broader socio-political analysis limits policy evaluation can be seen in evaluations of charter school policies.

Studies of the efficacy of charter schools and voucher programs abound in education policy research. However, Frankenberg and colleagues called the charter school movement a failure of civil rights due to segregation and the lack of improvement in student outcomes. Chapman and Donnor note that many market reforms promise that providing choice to students of color will increase their educational opportunity by impeding and interrupting the monopolization of public education created by traditional public schools. The authors argue, however, that market ideology that does not contend with white privilege and white racism causes policies to become "entrench[ed] in racial inequality," and that failing to attend to the social and societal implications of such market-based policies does a disservice to students.

Limitations of Methodological Assumptions

Hanushek asserts that the statistical methods researchers employ and the assumptions that inform those methods are crucial to understanding the results produced in research. Borman et al. (2006) illustrated this in their analysis of quasi-experimental studies of comprehensive school reforms such as Success for All. The authors found that the effects reported in studies of Success for All are undermined when the statistical analysis does not account for the programmatic requirement that 80% of a school's faculty must agree to adopt the reform. Accounting for this requirement is crucial, because the requirement builds in one of the key conditions of implementation, willingness to change, and thereby shapes which schools adopt the reform at all. Employing a different statistical method and a sample that included schools evidencing "good and bad" implementation results, the authors generated smaller effect sizes than researchers who utilized a randomized design.
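
The underlying point can be sketched with simulated data (this is an illustration of omitted-variable bias under an adoption rule, not a reproduction of Borman et al.'s analysis): when adoption depends on faculty buy-in and buy-in itself relates to outcomes, a model that ignores buy-in inflates the estimated program effect.

```python
# Illustrative sketch: simulated data, not Borman et al.'s actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
buy_in = rng.uniform(0.3, 1.0, n)        # share of faculty supporting the reform
adopted = (buy_in >= 0.8).astype(int)    # mirrors the 80%-agreement adoption rule
# Buy-in raises outcomes on its own; the program's true effect is modest (+1).
score = 50 + 5 * buy_in + 1 * adopted + rng.normal(0, 3, n)
df = pd.DataFrame({"score": score, "adopted": adopted, "buy_in": buy_in})

naive = smf.ols("score ~ adopted", data=df).fit()              # omits buy-in
adjusted = smf.ols("score ~ adopted + buy_in", data=df).fit()  # accounts for it
print("naive effect:   ", round(naive.params["adopted"], 2))
print("adjusted effect:", round(adjusted.params["adopted"], 2))
```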

Implications for Policy Design and Implementation

In 1992, Miles and Fullan declared, "Education reform can't be achieved if leaders and participants don't internalize and habitually act on basic knowledge of how change takes place". This proclamation is echoed by Bryk and colleagues, who argue, "Most education reforms reflect at best a partial understanding of system dynamics, and some seem almost oblivious to the fundamental character of the phenomena they seek to change". This lack of attention to understanding change is perpetuated by an overreliance on quantitative measures. While evidence is crucial to policy design, we must also understand what particular forms of data can and cannot tell us. Without a dynamic approach to the study of policy implementation, policymakers will continue designing and implementing programs with limited success. The failure and abandonment of policy often leads to policy churn. The consequences of policy churn are not simply monetary or political; the constant turnover of policies frustrates stakeholders and undermines future change efforts within schools.

Method and Analysis for the Future

Spencer and Ritchie, in their article "Qualitative Analysis for Applied Policy Research", provide a framework that could prove beneficial to qualitative case study scholars. Qualitative data are described as a mechanism by which greater understanding can be reached; they can directly inform future quantitative analysis by elucidating upon and corroborating findings gathered via quantitative methods. The work of qualitative analysis, in their view, is "detection and the tasks of defining, categorizing, theorizing, explaining, exploring, and mapping". It is within these qualitative tasks that Spencer and Ritchie outline their "Framework" strategy for applied research. Their framework is founded upon the words and accounts of participants, is flexible and easily adjusted throughout analysis, and allows for easy access to the raw qualitative material. The promise of this systematic approach to data analysis is that it allows the "analytic process, and the interpretations derived from it, [to] be viewed and judged by people other than the primary analyst".
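
As a rough illustration of the charting stage of Framework analysis, a cases-by-themes matrix can be kept in simple tabular form; the schools, theme names, and cell summaries below are hypothetical.

```python
# Illustrative sketch: hypothetical schools, themes, and cell summaries.
import pandas as pd

charted = pd.DataFrame(
    [
        {"case": "School A",
         "leadership": "Principal champions the reform (interview 1)",
         "staff_buy_in": "Veteran staff skeptical (focus group 2)",
         "resources": "Title I funds reallocated (document review)"},
        {"case": "School B",
         "leadership": "Leadership turnover mid-year (interview 4)",
         "staff_buy_in": "Staff vote to adopt passed narrowly (interview 5)",
         "resources": "No dedicated coach (field notes)"},
    ]
).set_index("case")

# Reading down a column compares cases on one theme; reading across a row
# builds a holistic account of one case, while each cell stays traceable
# to the raw material it summarizes.
print(charted["staff_buy_in"])
```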

In an effort to produce scholarship that can provide a holistic understanding of policy outcomes, both qualitative and quantitative researchers should seek to incorporate a singular scale that can measure the organizational contexts found to impact the implementation, uptake, and sustainability of reforms. The Quality Implementation Framework put forth by Meyers et al. and Normalization Process Theory both represent implementation frameworks and conceptualizations that can be adapted to education contexts, in addition to scales that are specific to educational contexts and the policymaking process writ large.

If quantitative methodologists employed a reliable instrument that measured such contextual factors, collected in the same way as demographic variables such as race, SES, and gender, they would be able to contribute a wealth of information regarding not just programmatic outcomes but also the contexts in which programs were implemented, which can lead to greater research on implementation processes embedded within scholarship.
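
A minimal sketch of what this could look like in practice, with hypothetical survey items and school identifiers, scores a short organizational-context scale and carries it alongside the demographic and outcome data an evaluation already collects:

```python
# Illustrative sketch: hypothetical survey items, schools, and column names.
import pandas as pd

surveys = pd.DataFrame({
    "school_id": [1, 1, 2, 2],
    # 1-5 Likert items on climate, leadership support, and willingness to change
    "climate": [4, 5, 2, 3],
    "leader_support": [5, 4, 2, 2],
    "willingness": [4, 4, 1, 2],
})

# Average the item responses within each school into a single 0-1 context score.
context = (
    surveys.groupby("school_id")[["climate", "leader_support", "willingness"]]
    .mean()          # per-school item means
    .mean(axis=1)    # average across items
    .div(5)          # rescale the 1-5 Likert range toward 0-1
    .rename("context_score")
)

outcomes = pd.DataFrame({
    "school_id": [1, 2],
    "pct_frl": [0.62, 0.48],     # demographic covariate already collected
    "mean_score": [71.2, 64.5],  # programmatic outcome
}).set_index("school_id")

# The merged table carries outcome, demographic, and implementation-context data together.
print(outcomes.join(context))
```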

Conclusion

Policy implementation scholars are often able to capture the complexities of implementing environments, processes, and behaviors of implementing actors in their scholarship. These data have the potential to greatly influence policy design and the implementing practices of school organizations. However, the complexities of their findings and conclusions are often not conducive to the evidence-based, outcomes-focused culture of policymaking, which leads to an overreliance on econometric and other quantitative policy evaluation methods, which are limited by their variables and assumptions and often do not include data regarding implementation processes. Without adequate knowledge of the complexities of environments and processes, the ability to replicate programmatic outcomes is threatened. As such, there is a need for qualitative researchers to employ analytical and methodological approaches that make their research more applicable to policy. Additionally, quantitative researchers should endeavor to include data regarding implementation processes within their studies of the efficacy of particular educational interventions.
