Saturday, October 27, 2012

Writing

I'm currently writing up my lit review, which is taking longer than I expected. Initially I had planned to conduct a review of the literature as I progressed through the PhD. However, we have discussed writing this up as a paper setting out why a study of the nature I proposed for my PhD is needed, i.e. a rationale, if you like.
One of my issues with this is that there will be more literature as I go, and so this can't be one defining document. Marina suggested I publish the table in which I'm synthesising findings from the papers on a public website, so it can be constantly updated. I like this idea.
On a writing retreat this week I showed my writing group the efforts so far, and I have to say I was a bit disappointed with the feedback at first. The general opinion was that I shouldn't try to publish this lit review, as most journals would not be interested in it; after all, who am I to tell the world...? They suggested I keep writing it up, but that it become a chapter in the thesis rather than a journal article. That's actually fine, and I'll discuss this with my supervisors next week.

So where I am right now is trying to weave the themes that have come out of my searching together with the relevant items from the articles, and then craft a suitable reflection/discussion and conclusion.
Themes I'm working with are:

Formative vs. summative evaluation
Building capacity in evaluation skills
Non-completion of project elements (such as evaluation, reports and the project itself)
Non-usage of evaluation findings
Importance of stakeholder involvement
Inaccurate initial expectations (could be linked to building capacity in evaluation skills, above)
Importance of planning and defining clear evaluation criteria
Evaluation approaches, models or frameworks



Tuesday, October 23, 2012

Theory-based evaluation


Nesman, Batsche & Hernandez (2007)
Theory-based evaluation of a comprehensive Latino education initiative: An interactive evaluation approach

This paper describes a 5-year initiative to develop, implement and evaluate program(s) that would increase Latino student access to higher education. Theory of change and logic models were used to guide the program, as these have previously been shown to be most effective when trying to create social change within comprehensive community initiatives. 'Logic models can also serve as a roadmap for implementers to move from ideas to action by putting components together into a visual framework (Hernandez & Hodges, 2003).' (p.268)
A conceptual model was developed which incorporated context, guiding principles, implementation strategies, outcomes and evaluation, and resulted in a vision statement for the program. The paper also describes the interventions which were to be implemented, and goes on to describe the evaluation approach in more detail. The authors used an embedded case-study design (Yin, 1984) and mixed methods with a developmental approach, which allowed for adaptation over time as the project moved through the varying stages of completion. Key questions were developed for each goal from the funding agency, i.e. process, impact and sustainability. One of the key findings under process was that the initial plan 'had been overly ambitious and that it would not be possible to accomplish this large number of interventions with the available resources' (p.272). This resulted in a paring back of outcomes, with some initiatives being prioritised and some being dropped altogether. A finding under impact was that 'although it would be several years before long term outcomes could be effectively measured, the evaluators developed a tracking system to monitor changes in student outcomes each year' (p.274). With sustainability, it was felt that strategies within institutions were more likely to be sustained than those relying on collaboration and cross-institutional coordination, unless there was ongoing external support (p.279).
The authors also wrote about lessons learned from this approach. If the benefits of theory-based evaluation are to be maximised, program participants need training on logic model development and theory of change approaches early in the implementation process. This training can lead to the development of interactive and productive relationships between evaluators and implementers. Adopting a developmental approach was also highly beneficial in this project.

Participatory Evaluation


Lawrenz (2003). How Can Multi-Site Evaluations Be Participatory?

This article examines five NSF-funded multi-site programs and asks whether they can be considered truly participatory, since participatory evaluation requires stakeholder groups to have meaningful input in all phases, including evaluation design, defining outcomes and selecting interventions. The authors note, though, that these projects were funded through a competitive process, and selection of successful projects was not based on their facilitation of successful (central) program evaluation. The programs investigated were: Local Systemic Change (LSC), Collaboratives for Excellence in Teacher Preparation Program (CETP), the Centers for Learning and Teaching (CLT), Advanced Technological Education (ATE) and the Mathematics and Science Partnerships (MSP).
The criteria used to judge whether these programs were participatory in their evaluation practices drew on two frameworks: Cousins and Whitmore's three-dimensional formulation of collaborative inquiry (1998) and Bourke's participatory evaluation spiral design using 8 key decision points (1998). Four types of decision-making questions were used to compare the degree to which each of the individual projects was involved with the program evaluation. These were:
(1) the type of evaluation information collected, such as defining questions and instruments;
(2) whether or not to participate;
(3) what data to provide; and
(4) how to use the evaluation information.

Findings showed that the programs were spread across a continuum from no participation to full participation. So the authors next asked 'in what ways can participation contribute to the overall quality of the evaluation' (p.476). They suggest four specific dimensions of quality evaluation:
(1) objectivity;
(2) design of the evaluation effort;
(3) relationship to site goals and context; and
(4) motivation to provide data.
The authors go on to discuss these in relation to the literature. Finally, they propose a model for participatory multi-site evaluations which they name a 'negotiated evaluation approach'. The approach consists of three stages: creating the local evaluations (each project), creating the central evaluation team, and negotiation and collaboration on the participatory multi-site evaluation. This enables the evaluation plan to evolve out of the investigations at the sites, and results in instruments and processes which are grounded in the reality of the program as it is implemented.



Sunday, October 21, 2012

The Multiattribute Utility (MAU) approach


Stoner, Meadan, Angell and Daczewitz (2012)
Evaluation of the Parent-implemented Communication Strategies (PiCS) project using the Multiattribute Utility (MAU) approach



The Multiattribute Utility (MAU) approach was used to evaluate a project federally funded by the Institute of Education Sciences. The purpose of the evaluation was formative: to measure the extent to which the first two (of three) goals of the project were being met. It was completed after the second year of the project. The project goals were:
(a) develop a home-based naturalistic and visual strategies intervention program that parents can personalize and implement to improve the social-pragmatic communication skills of their young children with disabilities;
(b) evaluate the feasibility, effectiveness, and social validity of the program; and
(c) disseminate a multimedia instructional program, including prototypes of all materials and methods that diverse parents can implement in their home settings.
MAU was chosen as an approach because it is participant-oriented, allowing the parent representatives to have a voice in the evaluation. There are seven steps in a MAU evaluation, each of which is discussed in the paper:
1. Identify the purpose of the project
2. Identify relevant stakeholders (these individuals will help make decisions about the goals and attributes and their importance)
3. Identify appropriate criteria to measure each goal and attribute
4. Assign importance weights to the goals and attributes
5. Assign utility-weighted values to the measurement scales of each attribute
6. Collect measurable data on each attribute being measured
7. Perform the technical analysis
A point worth noting under step 3 is that the aim is to identify the essential attributes within each goal area, not a set number of attributes. For this project, 28 attributes were defined by the stakeholders, and 25 were found through the evaluation to have been met.
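To make the arithmetic behind steps 4, 5 and 7 concrete, here is a minimal sketch of how an overall MAU score reduces to a weighted sum. The attribute names, weights and utility values below are hypothetical, invented purely for illustration; the paper does not report its raw weighting data in a form I could reuse.

```python
# Minimal sketch of the MAU arithmetic (steps 4, 5 and 7).
# All attribute names, weights and utility values are hypothetical.

# Step 4: importance weights assigned by stakeholders, normalised to sum to 1.
weights = {
    "parent_satisfaction": 0.40,
    "strategy_fidelity": 0.35,
    "materials_usability": 0.25,
}

# Steps 5-6: utility values (0-100) read off each attribute's
# measurement scale once the data have been collected.
utilities = {
    "parent_satisfaction": 80,
    "strategy_fidelity": 65,
    "materials_usability": 90,
}

# Step 7: the technical analysis is a utility-weighted sum.
overall = sum(weights[a] * utilities[a] for a in weights)
print(f"Overall utility: {overall:.2f} / 100")  # Overall utility: 77.25 / 100
```

The arithmetic itself is trivial; the value of the approach lies in steps 2 to 4, where stakeholders negotiate which attributes matter and how much weight each should carry.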
For this project, the MAU approach was found to be in keeping with one of the core values of the project, that of stakeholder involvement. Four primary benefits of using this approach were identified, along with one concern. The MAU:
(a) was based on the core values of the PiCS project;
(b) engaged all stakeholders, including parents, in developing the evaluation framework;
(c) provided a certain degree of objectivity and transparency; and
(d) was comprehensive.
The primary concern was the length of time and labour required to conduct the evaluation. For this reason the authors believe it may not be applicable for evaluating smaller projects. 

Saturday, October 20, 2012

Archipelago approach


Lawrenz & Huffman, (2002)
The Archipelago Approach To Mixed Method Evaluation

This approach likens the different data collection methods to groups of islands, all interconnected ‘underwater’ by the underlying ‘truth’ of the program. The approach has its advantages: it is often difficult to uncover the complete ‘truth’, and a combination of data types and analysis procedures can help to reveal it. The authors cite Green & Caracelli (1997) and their three stances on mixing paradigms (the purist, pragmatic and dialectical stances), then attempt to map their archipelago approach to each of the three. In doing so they oppose Green and Caracelli’s view that the stances are distinct; the authors believe their metaphor allows for simultaneous consideration of the stances and thus provides a framework for integrating designs.

A nationally funded project is evaluated using the archipelago approach to highlight its benefits. Science teachers in 13 high schools across the nation were recruited, and consideration was given to the level of mixing of methods in an area that traditionally used a more ‘logical-positivist’ research approach. Three different approaches were used:
1. Quasi-experimental design – both quantitative and qualitative assessments of achievement. About half of the evaluation effort, in terms of time and money, was spent on this approach. This was pragmatic, as it was included to meet the needs of stakeholders.
2. A social interactionism approach – data were gathered through site visits to schools and classrooms, with observations recorded in open-ended field notes; these data produced narrative descriptions of each site. About one third of the evaluation effort focused on this approach.
3. A phenomenological study of six of the teachers during implementation of the new curriculum, via in-depth interviews.
The archipelago approach extends the idea of triangulation, which is linear, to take into account the complex, unequally weighted and multi-dimensional nature of the evidence. When considering the underlying truth about the effectiveness of the program, achievement was viewed as likely to be the strongest indicator, and therefore most effort went into this approach. The learning environment was considered the next strongest indicator, and the teachers’ experience the least.
‘This approach created a way for the authors to preserve some unique aspects of each school while at the same time considering that the schools were linked in some fundamental way’ (p.337). It is hoped that this approach can lead evaluators to think less in either/or ways about mixing methods and more in complex, integrative ways.

Another Website?

At my supervision session this week, we talked about the possibility of creating a public website to publish the lit review. A website would become a useful resource, allow public comment, and be the place where I could show my results, particularly if I go on to develop an interactive evaluation instrument.

The question is, can this website act as a publication? I could then refer to the table (which is growing unwieldy for a Word doc) when I write the paper on the literature synthesis. Questions to ponder...

Evaluation Capacity Building


Preskill, H., & Boyle, S. (2008). A Multidisciplinary Model of Evaluation Capacity Building. American Journal of Evaluation 29(4), 443–459.


Evaluation Capacity Building (ECB) has become a hot topic of conversation, activity and study in recent times. This paper offers a comprehensive model for designing and implementing ECB activities and processes. The model draws on the fields of evaluation, organisational learning and change, and adult learning.
Triggers for ECB usually come from external demands (accountability, environmental change, policies and plans) or internal needs (organisational change, a mandate from leadership, a perceived lack of knowledge and skills, increased funding, a perceived shortage of evaluators, a desire to improve programs).
Assumptions are also required and may include: '(a) organization members can learn how to design and conduct evaluations, (b) making learning intentional enhances learning from and about evaluation, and (c) if organization members think evaluatively, their programs will be more effective' (p.446).
And expectations of any ECB effort may include:
  • Evaluations will occur more frequently 
  • Evaluation findings will be used more often for a variety of purposes (including program improvement, resource allocations, development of policies and procedures, current and future programming, and accountability demonstrations)
  • Funders will be more likely to provide new, continuing, and/or increased resources
  • The organization will be able to adapt to changing conditions more effectively
  • Leaders will be able to make more timely and effective decisions
  • The organization will increase its capacity for learning


The paper looks at each of the 10 teaching and learning strategies (from the inner left circle of the model), but then goes on to stress the importance of the design and implementation of any initiative. For design, the following are of importance:
  • identifying ECB participants' characteristics – the evaluation competence of potential participants needs to be assessed
  • determining available organisational resources, including facilitation and time
  • relevant evaluation, learning, and individual and organisational change theories
  • ECB objectives – cognitive, behavioural and affective
**some good references here on improving attitudes towards evaluation and reducing stress and anxiety around evaluation**

There is a section on transfer of learning, and acknowledgement that dialogue, reflection and articulating clear expectations for what and how to transfer knowledge and skills are critical for the longer-term impacts of ECB (p.453).

In terms of sustainable practice, the right circle of the model is described in more detail, with each of the 8 elements discussed. Finally, the diffusion element is explored; the authors use an image of water ripples, or reverberations, to depict diffusion emanating from the organisation.

They conclude that for ECB to be transformational, efforts must be intentional, systematic and sustainable (p.457).




Friday, October 12, 2012

A Project Profile Approach to Evaluation

Accountability is a common driver for evaluation, particularly as funding bodies strive to obtain measurable gains for their investments: in teacher content knowledge, changes in practice and, of course, student learning. The authors of this study argue that individual project profiles are needed to take into account the unique contextual variables of a project while comparing projects across a funded program.

The context for this study was the professional development (PD) of teachers in the K-12 sector under the Improving Teacher Quality State Grants Program. Each grant recipient is required to conduct some internal evaluation processes and also to be part of an external evaluation. This paper reports on the design of the latter. The goals were to determine:

(1) how well projects attained their objectives;
(2) the quality of the PD that was delivered; and
(3) what outcomes were achieved for teachers and students.

Nine projects were investigated and a profile for each was constructed. The profile consisted of six sections: project background; project design; participants and their schools; quality of implementation; satisfaction survey; and outcomes and recommendations. In other words, do not compare outcomes alone, since the teachers and the school settings can vary significantly across sites, and outcomes alone therefore do not tell the whole story.

The model of using project profiles was then compared to a model for evaluating professional development programs (Guskey, 2000). Guskey's hierarchical model includes five levels, moving from the simple to the more complex:
1. Participants' reactions
2. Participants' learning
3. Organization support and change
4. Participants' use of new knowledge and skills
5. Student learning outcomes


The authors mapped their model against Guskey's and found that they needed to modify Guskey's model to make it more holistic. They created a central core of levels 1, 2, 4 and 5, each fed by level 3, and then an outer layer of content, context and process (p.152).