
Thursday, November 1, 2012

A Similar Study


Oliver, MacBean, Conole & Harvey, 2002
Using a toolkit to support the evaluation of learning

This research was prompted by a rejection of the assumption that users have similar evaluation needs, which raises the problem that practitioners must be aware of the range of evaluation methods available to them and be able to select the approach that best addresses their needs (p. 207). The study was primarily concerned with evaluating ICTs, and more specifically with the usability of a toolkit developed by JISC in the UK. The web-based toolkit was developed out of a realisation that the existing evaluation instruments weren't quite hitting the mark. The new toolkit combined a structured design process (as in previous toolkit approaches) with rich descriptions of methods and supportive resources (as in the 'cookbook' approach).
The toolkit has a six-step approach, which can be thought of as combining contextual and mechanical (or strategic and tactical) concerns, with the needs of the stakeholders driving the process:

  1. Identification of the audience for the evaluation
  2. Selection of an evaluation question
  3. Choice of an evaluation methodology 
  4. Choice of data collection methods 
  5. Choice of data analysis methods 
  6. Selection of the most appropriate format(s) for reporting the findings to the audience
The toolkit is organized into three sections: Evaluation Planner, Evaluation Advisor and Evaluation Presenter. The Planner section helps the user define the scope of their evaluation, and its output is an evaluation strategy and implementation guide. The Advisor section covers the empirical aspects of evaluation, and the Presenter section focuses on communicating findings to the stakeholders. Finally, an evaluation plan can be printed out.
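To fix the flow in my own head, here is a minimal sketch (my own, with purely hypothetical names, not taken from the paper) of how the six decisions and the Planner/Advisor/Presenter grouping could be represented as a simple data structure:

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    """Hypothetical record of the six decisions made while working through the toolkit."""
    # Planner section: scoping the evaluation
    audience: str = ""                  # step 1: who the evaluation is for
    evaluation_question: str = ""       # step 2: what is being asked
    # Advisor section: the empirical aspects
    methodology: str = ""               # step 3: overall evaluation methodology
    data_collection_methods: list = field(default_factory=list)  # step 4
    data_analysis_methods: list = field(default_factory=list)    # step 5
    # Presenter section: communicating findings
    reporting_formats: list = field(default_factory=list)        # step 6

    def printable_plan(self) -> str:
        """Plain-text version of the plan, mimicking the final 'print out a plan' step."""
        return "\n".join([
            f"Audience: {self.audience}",
            f"Evaluation question: {self.evaluation_question}",
            f"Methodology: {self.methodology}",
            f"Data collection: {', '.join(self.data_collection_methods)}",
            f"Data analysis: {', '.join(self.data_analysis_methods)}",
            f"Reporting formats: {', '.join(self.reporting_formats)}",
        ])
```

The point of the sketch is simply that the first two decisions are contextual (Planner), the middle three are empirical (Advisor), and the last is about communication (Presenter), with the audience chosen first so it can drive everything downstream.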

This study was designed to provide formative feedback on the toolkit and summative information about its impact; the latter is not reported in this paper, as the time period did not allow it. Observational studies were used, involving participants working through the toolkit in a 'talk-aloud' protocol with an expert on hand to provide support and guidance. At the end of the session, users provided further feedback through a focus group. Participants in this stage were all novices to the field of evaluation.
After this first round, modifications were made to the toolkit and a second round, in the form of a workshop, was carried out. Participants in this stage came from a range of backgrounds and levels of experience.
The study showed that working through the toolkit allowed users to design evaluation strategies tailored to their local needs, rather than simply falling back on familiar but inappropriate methods. Some limitations were noted: the time to work through the toolkit is about 4.5 hours, which may make it unsuitable for smaller projects, and some users felt uncomfortable when presented with an evaluation approach they were not at ease with. A rank ordering of options may be a better solution.

Friday, October 12, 2012

A Project Profile Approach to Evaluation

Accountability is a common driver for evaluation, particularly as funding bodies strive to obtain measurable gains for their investments: in teacher content knowledge, change in practice and, of course, student learning. The authors of this study argue that individual project profiles are needed to take into account the unique contextual variables of a project whilst still allowing projects to be compared across a funded program.

The context for this study was the professional development of teachers in the K-12 sector under the Improving Teacher Quality State Grants Program. Each grant recipient is required to conduct internal evaluation processes and also to take part in an external evaluation. This paper reports on the design of the latter, whose goals were to determine:

(1) how well projects attained their objectives; 
(2) the quality of the PD that was delivered, and 
(3) what outcomes were achieved for teachers and students.

Nine projects were investigated and a profile for each was constructed. The profile consisted of six sections: Project background; Project design; Participants and their schools; Quality of implementation; Satisfaction survey; and Outcomes and recommendations. In other words, do not compare outcomes alone, since the teachers and the school settings can vary significantly across school sites, and outcomes alone therefore do not tell the whole story.

Then the model of using project profiles was compared to a model for evaluating professional development programs (Guskey, 2000). Guskey's hierarchical model includes five levels, moving from the simple to the more complex:
1. Participants' reactions
2. Participants' learning
3. Organization support and change
4. Participants' use of new knowledge and skills
5. Student learning outcomes


The authors mapped their model against Guskey's and found that they needed to modify Guskey's model to make it more holistic. They created a central core of levels 1, 2, 4 and 5, each fed by level 3, surrounded by an outer layer of content, context and process (p. 152).
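As a note to self, the reworked model might be captured roughly like this (my own sketch and naming, not the authors' notation): level 3 feeds each of the four core levels, and content, context and process wrap around the whole core.

```python
# Hypothetical sketch of the modified Guskey model described on p. 152:
# a central core of levels 1, 2, 4 and 5, each fed by level 3 (organization
# support and change), surrounded by an outer layer of content, context and process.
modified_guskey_model = {
    "outer_layer": ["content", "context", "process"],
    "core": {
        "fed_by": "3. Organization support and change",
        "levels": [
            "1. Participants' reactions",
            "2. Participants' learning",
            "4. Participants' use of new knowledge and skills",
            "5. Student learning outcomes",
        ],
    },
}
```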

Sunday, September 30, 2012

Meta-evaluation of methods for selecting projects


Brandon, Paul R. “A Meta-evaluation of Schools’ Methods for Selecting Site-managed Projects.” Studies in Educational Evaluation 24, no. 3 (1998): 213–228.

A meta-evaluation of 17 schools that applied for funding from a state-wide initiative. The author was interested in finding out how the schools evaluated which projects were put forward for the funding application.
There are three types of evaluative efforts required of schools: needs assessment; project evaluation (when searching for the project that best meets the student and school needs); and summative and formative evaluation after project implementation. This study investigates the second type, i.e. the school-based evaluative efforts and activities used when selecting educational projects to address identified needs.

The results showed that the extent to which teachers participated in making decisions about both the process and content of needs assessments was positively related to the validity of these decisions (p. 214).
Evaluation criteria came from using the CIPP approach to evaluation (Stufflebeam, 1983). The four criteria are (p. 216):
(a) the extent to which all faculty and staff participated in selecting projects;
(b) the extent to which school personnel used the appropriate sources of information;
(c) the extent to which the schools compared their preferred projects with other available projects before making their final project selections; and
(d) the extent to which the schools considered issues of feasibility such as project cost and ease of implementation.
A fifth criterion was used, based on the belief that projects are most likely to succeed when they are based on theories of education and have been shown to have succeeded elsewhere (Ellis & Fouts, 1993; Slavin, Karweit, & Madden, 1989). These five criteria then supported the five evaluation questions to be asked (p. 218):
(a) To what extent did school personnel participate in project selection?
(b) To what extent did the schools collect information about possible projects from the appropriate types of sources?
(c) To what extent did the schools compare their preferred projects with others before making final project selections?
(d) To what extent did the schools consider project cost and ease of implementation when selecting projects?
(e) To what extent were the selected projects based on theory and supported by empirical findings of previous studies?

Two data collection methods were used: a self-report survey questionnaire (for the first four questions) and a literature review (for the fifth). Findings: results were encouraging for two of the questions (b and c) and not so encouraging for the remaining three (a, d and e). Three categories of problems that the schools had encountered when implementing their projects were identified:
  • inaccurate estimates of project costs.
  • misjudging the managerial, administrative, or logistical requirements of the projects.
  • underestimating the level of staff, parent, or community understanding or motivation required for successful project implementation.
[useful when writing about my findings from Phase 1]

The final question highlighted that empirical evidence about project success was not found for about half of the schools. This could be explained by the fact that school personnel often know little about the proper use of research findings; this should not be the case with HE project proposals.
Selecting projects is a task added to schools' already full schedules, and typically few faculty and staff are available or willing to participate [the same could be said of HE]. However, the author recommends that schools be shown the advantages of allowing as many of their staff and faculty as feasible to participate in project selection. Assistance of this sort would improve the chance that the best projects are identified to meet needs and would help ensure that project funding is well spent.


Saturday, November 19, 2011

Quality

Reading Cooksy & Caracelli (2005) on Quality, Context and Use: Issues in Achieving the Goals of Metaevaluation.

There are many defined standards of quality, and they basically relate to the evaluation models used in a study. Each model has a number of criteria against which a study can be evaluated, and therefore any metaevaluation must take these quality standards into account.

That brings me to a question about my own study: what are the quality standards being employed? I have created a set of criteria based on a number of different models, i.e. Scriven, Patton, Stufflebeam, Owen, Chesterton and Cummings. Should I be meeting with stakeholders to find out what they consider to constitute quality in our context? This may help, in turn, to ensure the evaluation findings are put to better use (Johnson et al., 2005).

Another finding in this study referred to the lack of transparent information in the final reports. I'm finding the same thing in my search of the identified data (final project reports): a lack of detailed data for a metaevaluation. The reports' intended audience differs between the internal projects and the external projects.

The finding is that an organisation should identify what it may need from a metaevaluation and then ensure that all evaluations conducted can one day be metaevaluated. This would include, for example, defining the methodology in detail.


Sunday, September 25, 2011

Lots of Links - more on meta-evaluation


Wondering whether I should revisit the MQ projects and contact the leaders anyway to see what they wrote in the evaluation methods section of their applications; but then I would need to ask Barb if I can get a copy of those applications, as they would not be publicly available. And do I then tell people that I have looked at their application before I interview them? I would need to disclose this.

Need to add the word metaevaluation to the title of the PhD.

Mark, Henry and Julnes (2000) on reasons for evaluating. This would be more relevant than the reference I used in my proposal, which was from 1998 and was only editors' notes... maybe?

Add the links from the Western Michigan University Evaluation Center checklists page to my blog for future reference: http://www.wmich.edu/evalctr/checklists/about-checklists/

Usability evaluation report: useful for my usability testing, with some nice references and a way to lay out the findings, etc.

Papers to be read
Metaevaluation, Stufflebeam
Evaluation bias and its control, Scriven
Metaevaluation in practice: selection and application of criteria, Cooksy & Caracelli
Metaevaluation revisited, Scriven
Metaevaluation as a means of examining evaluation influence, Oliver
A basis for determining the adequacy of evaluation designs, Sanders & Nafziger
Mandated Evaluation: Integrating the Funder-Fundee Relationship into a Model of Evaluation Utilization
Mayhew, F. (2011, May 9). Mandated Evaluation: Integrating the Funder-Fundee Relationship into a Model of Evaluation Utilization. Journal of MultiDisciplinary Evaluation [Online] 7(16). Available: http://survey.ate.wmich.edu/jmde/index.php/jmde_1/article/view/315/315