
Tuesday, October 23, 2012

Theory-based evaluation


Nesman, Batsche & Hernandez (2007)
Theory-based evaluation of a comprehensive Latino education initiative: An interactive evaluation approach

This paper describes a five-year initiative to develop, implement and evaluate program(s) that would increase Latino student access to higher education. Theory of change and logic models were used to guide the program, as these have previously been shown to be most effective when trying to create social change within comprehensive community initiatives. 'Logic models can also serve as a roadmap for implementers to move from ideas to action by putting components together into a visual framework (Hernandez & Hodges, 2003).' (p.268)
A conceptual model was developed which incorporated context, guiding principles, implementation strategies, outcomes and evaluation, and resulted in a vision statement for the program. The paper also describes the interventions that were to be implemented, and goes on to describe the evaluation approach in more detail. The evaluators used an embedded case-study design (Yin, 1984) and mixed methods with a developmental approach, which allowed for adaptation over time as the project moved through the varying stages of completion. Key questions were developed for each goal set by the funding agency, i.e. Process, Impact and Sustainability.

One of the key findings under Process was that the initial plan 'had been overly ambitious and that it would not be possible to accomplish this large number of interventions with the available resources.' (p.272). This resulted in a paring back of outcomes, with some initiatives being prioritised and others dropped altogether. A finding under Impact was that 'although it would be several years before long term outcomes could be effectively measured, the evaluators developed a tracking system to monitor changes in student outcomes each year.' (p.274). On Sustainability, it was felt that strategies within institutions were more likely to be sustained than those relying on collaboration and cross-institutional coordination, unless there was ongoing external support (p.279).
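As an aside, the components of a conceptual model like the one described above lend themselves to a simple structured representation. Here is a minimal sketch in Python; the class, field names and sample entries are this post's own illustration, not anything taken from the paper:

    from dataclasses import dataclass, field

    @dataclass
    class LogicModel:
        """Illustrative container for the five components named in the paper.

        All field names and example values are assumptions made for this
        sketch, not the authors' actual model.
        """
        context: list[str] = field(default_factory=list)
        guiding_principles: list[str] = field(default_factory=list)
        implementation_strategies: list[str] = field(default_factory=list)
        outcomes: list[str] = field(default_factory=list)
        evaluation_questions: dict[str, list[str]] = field(default_factory=dict)

    # Hypothetical entries, loosely echoing the initiative's three goal areas.
    model = LogicModel(
        context=["Latino students are under-represented in higher education"],
        guiding_principles=["Community collaboration"],
        implementation_strategies=["Tutoring", "Parent outreach"],
        outcomes=["Increased college enrolment"],
        evaluation_questions={
            "Process": ["Were the interventions implemented as planned?"],
            "Impact": ["Did student outcomes change year on year?"],
            "Sustainability": ["Which strategies persist without external support?"],
        },
    )
    print(model.evaluation_questions["Process"][0])

Writing the components down in one structure like this is, in effect, what the authors mean by a logic model serving as a 'roadmap' from ideas to action.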
The authors also wrote about lessons learned from this approach. If the value of theory-based evaluation is to be maximised, it requires training of program participants in logic model development and theory of change approaches early in the implementation process. This training can lead to the development of interactive and productive relationships between evaluators and implementers. Adopting a developmental approach was also highly beneficial in this project.

Participatory Evaluation


Lawrenz (2003). How Can Multi-Site Evaluations be Participatory?

This article looks at five NSF-funded multi-site programs and asks whether they can be considered truly participatory, since participatory evaluation requires stakeholder groups to have meaningful input in all phases, including evaluation design, defining outcomes and selecting interventions. The authors highlight, though, that these projects were funded through a competitive process, and that selection of successful projects was not based on how well they would facilitate the central program evaluation. The programs investigated were: Local Systemic Change (LSC), the Collaboratives for Excellence in Teacher Preparation Program (CETP), the Centers for Learning and Teaching (CLT), Advanced Technological Education (ATE) and the Mathematics and Science Partnerships (MSP).
The criteria used to judge whether these programs were participatory in their evaluation practices drew on two frameworks: Cousins and Whitmore's (1998) three-dimensional conceptualisation of collaborative inquiry, and Bourke's (1998) participatory evaluation spiral design with its eight key decision points. Four types of decision-making questions were used to compare the degree to which the individual projects were involved in the program evaluation. These were:
(1) the type of evaluation information collected, such as defining questions and instruments;
(2) whether or not to participate; 
(3) what data to provide; and 
(4) how to use the evaluation information.

Findings showed that the programs were spread across a continuum from no participation to full participation. The authors therefore next asked 'in what ways can participation contribute to the overall quality of the evaluation' (p.476). They suggest four specific dimensions of quality evaluation:
(1) objectivity,
(2) design of the evaluation effort, 
(3) relationship to site goals and context and 
(4) motivation to provide data.
The authors go on to discuss these in relation to the literature. Finally, they propose a model for participatory multi-site evaluations which they name a 'negotiated evaluation approach'. The approach consists of three stages: creating the local evaluations (one per project), creating the central evaluation team, and negotiating and collaborating on the participatory multi-site evaluation. This enables the evaluation plan to evolve out of the investigations at the sites and results in instruments and processes that are grounded in the reality of the program as it is implemented.
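To make the idea of a participation continuum concrete, here is a toy sketch of how projects might be placed along it by scoring local involvement at each of the four decision points. The scale, project names and scores are all invented for illustration; the article itself computes nothing like this:

    # Toy sketch: place projects on a participation continuum by scoring
    # local-site involvement at each of the four decision points
    # (0 = no say, 1 = some say, 2 = full say). All values are invented.
    DECISION_POINTS = [
        "type of evaluation information collected",
        "whether or not to participate",
        "what data to provide",
        "how to use the evaluation information",
    ]

    projects = {
        "Project A": [0, 1, 1, 0],  # mostly centrally driven
        "Project B": [2, 2, 2, 1],  # close to full participation
    }

    for name, scores in projects.items():
        # Percentage of the maximum possible participation score.
        pct = 100 * sum(scores) / (2 * len(DECISION_POINTS))
        print(f"{name}: {pct:.0f}% of full participation")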



Sunday, October 21, 2012

The Multiattribute Utility (MAU) approach


Stoner, Meadan, Angell and Daczewitz (2012)
Evaluation of the Parent-implemented Communication Strategies (PiCS) project using the Multiattribute Utility (MAU) approach



The Multiattribute Utility (MAU) approach was used to evaluate a project federally funded by the Institute of Education Sciences. The evaluation was formative, measuring the extent to which the first two of the project's three goals were being met, and was completed after the project's second year. The project goals were:
(a) develop a home-based naturalistic and visual strategies intervention program that parents can personalize and implement to improve the social-pragmatic communication skills of their young children with disabilities;
(b) evaluate the feasibility, effectiveness, and social validity of the program; and
(c) disseminate a multimedia instructional program, including prototypes of all materials and methods that diverse parents can implement in their home settings.
MAU was chosen as an approach because it is participant oriented, allowing the parent representatives to have a voice in the evaluation. There are seven steps in an MAU evaluation, each discussed in the paper:
1. Identify the purpose of the project
2. Identify relevant stakeholders (these individuals will help make decisions about the goals and attributes and their importance)
3. Identify appropriate criteria to measure each goal and attribute
4. Assign importance weights to the goals and attributes
5. Assign utility-weighted values to the measurement scales of each attribute
6. Collect measurable data on each attribute being measured
7. Perform the technical analysis
An important note under step 3 is that the aim is to identify the essential attributes within each goal area, not to hit a set number of attributes. For this project, 28 attributes were defined by the stakeholders, and the evaluation found 25 of them to have been met.
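Steps 4 to 7 amount to a weighted-utility calculation: each attribute's measured utility is multiplied by its importance weight, and the products are summed into an overall score. Here is a minimal sketch; the attribute names, weights and utility values are invented for illustration and are not the 28 attributes from the PiCS evaluation:

    # Minimal sketch of the MAU technical analysis (steps 4-7).
    # Attribute names, importance weights and utility scores are invented.
    attributes = {
        # attribute: (importance weight, utility score on a 0-100 scale)
        "parents can personalize the strategies": (0.40, 85),
        "materials are usable in home settings":  (0.35, 70),
        "program is judged socially valid":       (0.25, 90),
    }

    # Overall score = sum of weight * utility; the weights here sum to 1.0.
    mau_score = sum(weight * utility for weight, utility in attributes.values())
    print(f"Overall utility: {mau_score:.1f} / 100")  # 81.0 for these values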
For this project the MAU approach was found to be in keeping with one of the core values of the project, that of stakeholder involvement. Four primary benefits of using the approach were identified, along with one concern. The MAU approach:
(a) was based on the core values of the PiCS project;
(b) engaged all stakeholders, including parents, in developing the evaluation framework;
(c) provided a certain degree of objectivity and transparency; and
(d) was comprehensive.
The primary concern was the length of time and labour required to conduct the evaluation. For this reason the authors believe it may not be suitable for evaluating smaller projects.

Saturday, October 20, 2012

Archipelago approach


Lawrenz & Huffman (2002)
The Archipelago Approach To Mixed Method Evaluation

This approach likens the different data collection methods to groups of islands, all interconnected ‘underwater’ by the underlying ‘truth’ of the program. The metaphor is useful because the complete ‘truth’ is often difficult to uncover directly, and combining data types and analysis procedures can help reveal more of it. The authors cite Greene & Caracelli (1997) and their three stances on mixing paradigms (the purist, pragmatic and dialectical stances) and then attempt to map the archipelago approach onto each of the three. In doing so they oppose Greene and Caracelli’s view that the stances are distinct: the authors believe their metaphor allows for simultaneous consideration of all three and thus provides a framework for integrating designs.

A nationally funded project is evaluated using the archipelago approach to highlight its benefits. Science teachers in 13 high schools across the nation were recruited, and consideration was given to the level of method-mixing in a field that had traditionally used a more ‘logical-positivist’ research approach. Three different approaches were used:
1. Quasi-experimental design – both quantitative and qualitative assessments of achievement. About half of the evaluation effort, in terms of time and money, was spent on this approach. This was pragmatic, as it was included to meet the needs of stakeholders.
2. A social interactionist approach – data gathered through site visits to schools and classrooms, with observations recorded as open-ended field notes; these produced narrative descriptions of each site. About one third of the evaluation effort focused on this approach.
3. A phenomenological study of six of the teachers during their implementation of the new curriculum, via in-depth interviews.
The archipelago approach extends the idea of triangulation, which is linear, to take into account the complex, unequally weighted, multi-dimensional nature of the evidence. When considering the underlying truth about the effectiveness of the program, achievement was viewed as likely to be the strongest indicator, and therefore most effort went into that approach; the learning environment was considered the next strongest indicator, and the teachers’ experience the least.
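Purely as an illustration of that unequal weighting, the three strands could be encoded with weights mirroring the effort shares described above. The strand scores below are invented, and the authors themselves would resist collapsing the strands into a single number, so this sketch only shows how the weighting idea might look in code:

    # Toy encoding of the unequal weighting of the three evidence strands.
    # Weights roughly mirror the effort shares in the paper (about 1/2,
    # 1/3 and the remainder); the 0-1 evidence scores are invented.
    strands = {
        # strand: (weight, hypothetical strength of evidence, 0-1)
        "quasi-experimental achievement": (0.50, 0.7),
        "social interactionist site visits": (0.33, 0.6),
        "phenomenological interviews": (0.17, 0.8),
    }

    # A single weighted sum oversimplifies the authors' metaphor, but it
    # makes the unequal weighting concrete.
    overall = sum(weight * score for weight, score in strands.values())
    print(f"Weighted evidence of effectiveness: {overall:.2f}")  # ~0.68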
‘This approach created a way for the authors to preserve some unique aspects of each school while at the same time considering that the schools were linked in some fundamental way’ (p.337). It is hoped that this approach can lead evaluators to think less in either/or ways about mixing methods and more in complex, integrative ways.