Sunday, September 30, 2012

Case Study


Studying cases allows for obtaining an in-depth understanding (through explaining, exploring, and describing) of complex social phenomena, while retaining the holistic and meaningful characteristics of real-life events (Yin 1994).

Kohlbacher, Florian. “The Use of Qualitative Content Analysis in Case Study Research.” Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 7, no. 1 (2006). Available at: <http://www.qualitative-research.net/index.php/fqs/article/view/75/153>. Date accessed: 01 Oct. 2012.
Abstract: This paper aims at exploring and discussing the possibilities of applying qualitative content analysis as a (text) interpretation method in case study research. First, case study research as a research strategy within qualitative social research is briefly presented. Then, a basic introduction to (qualitative) content analysis as an interpretation method for qualitative interviews and other data material is given. Finally the use of qualitative content analysis for developing case studies is examined and evaluated. The author argues in favor of both case study research as a research strategy and qualitative content analysis as a method of examination of data material and seeks to encourage the integration of qualitative content analysis into the data analysis in case study research.

Meta-evaluation of methods for selecting projects


Brandon, Paul R. “A Meta-evaluation of Schools’ Methods for Selecting Site-managed Projects.” Studies In Educational Evaluation 24, no. 3 (1998): 213–228.

A meta-evaluation of the methods used by 17 schools applying for funding from a state-wide initiative. The author was interested in finding out how the schools evaluated which projects to put forward for funding.
Three types of evaluative effort are required of schools: needs assessment; project evaluation (searching for the project that best meets student and school needs); and summative and formative evaluation after project implementation. This study investigates the second type, i.e. the school-based evaluative efforts and activities used when selecting educational projects to address identified needs.

The results showed that the extent to which teachers participated in making decisions about both the process and content of needs assessments was positively related to the validity of these decisions. (p.214)
Evaluation criteria came from the CIPP approach to evaluation (Stufflebeam, 1983). The four criteria are:
  • (a) the extent to which all faculty and staff participated in selecting projects;
  • (b) the extent to which school personnel used the appropriate sources of information;
  • (c) the extent to which the schools compared their preferred projects with other available projects before making their final project selections; and
  • (d) the extent to which the schools considered issues of feasibility such as project cost and ease of implementation. (p.216)
A fifth criterion was added, based on the belief that projects are most likely to succeed when they are based on theories of education and have been shown to have succeeded elsewhere (Ellis & Fouts, 1993; Slavin, Karweit, & Madden, 1989). These five criteria then supported the five evaluation questions: (a) To what extent did school personnel participate in project selection? (b) To what extent did the schools collect information about possible projects from the appropriate types of sources? (c) To what extent did the schools compare their preferred projects with others before making final project selections? (d) To what extent did the schools consider project cost and ease of implementation when selecting projects? (e) To what extent were the selected projects based on theory and supported by empirical findings of previous studies? (p.218)

Two data collection methods were used: a self-report survey questionnaire (for the first four questions) and a literature review (for the fifth). Findings: results were encouraging for two of the questions (b and c) and less so for the remaining three (a, d and e). Three categories of problems that the schools encountered when implementing their projects were identified:
  • inaccurate estimates of project costs.
  • misjudging the managerial, administrative, or logistical requirements of the projects.
  • underestimating the level of staff, parent, or community understanding or motivation required for successful project implementation.
[useful when writing about my findings from Phase 1]

The final question highlighted that empirical evidence of project success was not found for about half of the schools. This may be because school personnel often know little about the proper use of research findings; the same should not be the case with HE project proposals.
Selecting projects is also a task added to schools' already full schedules, and typically few faculty and staff are available or willing to participate [the same could be said of HE]. However, the author recommends that schools be shown the advantages of allowing as many of their staff and faculty as feasible to participate in project selection. Assistance of this sort would improve the chance that the best projects are identified to meet needs and would help ensure that project funding is well spent.


An evaluation framework for sustaining the impact of educational development


Hashimoto, Kazuaki, Hitendra Pillay, and Peter Hudson. “An Evaluation Framework for Sustaining the Impact of Educational Development.” Studies In Educational Evaluation 36, no. 3 (2010): 101–110.

The context of this paper is international aid agencies' funding of educational development projects in recipient countries and its apparent ineffectiveness. The authors were interested in moving beyond donor agencies' internal compliance requirements by looking at how local evaluation capacity could be developed and how developments could be sustained after project completion. Although this context is not directly applicable to the HE sphere, a parallel can be drawn between external funding agents and internal projects.

The authors define process evaluation (quoting DAC Network on Development Evaluation (2008), Evaluating development cooperation, OECD DAC Network on Development Evaluation, retrieved January 6, 2010, from http://www.oecd.org/dataoecd/3/56/41069878.pdf) and state that the importance of process evaluation lies in involving participants in making decisions on a project, such as terminating it if necessary (p.102). The authors cite Miyoshi and Stemming (2008) in noting that most studies on evaluation with participatory approaches are not underpinned by evaluation theories but are method-oriented.

An Egyptian project was used as a case study (see previous post), and there were two research questions: (1) how can an entire educational development project be evaluated? and (2) how can capacity development in educational reform be evaluated? Participants included six different groups of stakeholders: the funding body, local administrators, researchers, teachers, parents, and students. The analytic technique used was pattern matching (Yin, 2003, p. 116) to enhance internal validity. Three themes emerged from the study: context, outcome, and process evaluation.

Outcome evaluation:

  • Assessing outcomes is necessary for determining the success of an educational reform project.
  • Outcome evaluation should include local participant involvement, since local participants are the end users.
  • Local stakeholders should be seen not as informants or discussants but as evaluators working jointly with aid agencies, so they can appreciate the success and failure of achieving the objectives.
  • Results supported the use of an external evaluator who, in collaboration with the internal evaluators of the project, can undertake a macro-level evaluation of the project.


Context Evaluation

  • Context evaluation assesses the local needs and problems to be addressed, along with cultural, political, and financial issues; it assists in designing a project and setting objectives before an educational project begins.
  • More local stakeholders, such as representatives of the local community, need to join the evaluation to make their voices heard because, after all, they are the beneficiaries.
  • Engaging the various stakeholders in dialogue throughout the project, from the design phase onwards, may enable their opinions and interests to be considered in designing and implementing a more effective project (House & Howe, 2000).
Process evaluation


  • There was a need for a systematic participatory evaluation approach involving individuals and groups at the different levels of an educational system; this was central to process evaluation.
  • The linchpin of a sound process evaluation is employing skilled people.
  • The practice and culture of process evaluation should be nurtured during the life of educational projects and institutionalized locally; this has the potential to sustain the impact of projects.

Conclusion: conventional monitoring and evaluation practices cannot sustain a project beyond its lifetime, and 'paradigms should shift from outcome-focused evaluation currently dominated by international donor agencies to process evaluation conducted largely by local participants but also supported by donor agencies' (p.109).

Framework outlined in picture below (from p.108):
Other articles that follow this line of thinking:
Donnelly, John. “Maximising Participation in International Community-level Project Evaluation: A Strength-based Approach.” Evaluation Journal of Australasia 10, no. 2 (2010): 43–50. Available at: <http://search.informit.com.au/fullText;dn=201104982;res=APAFT>. ISSN: 1035-719X. Date accessed: 01 Oct. 2012.

Challenging times for evaluation of international development assistance

Nagao, M. “Challenging Times for Evaluation of International Development Assistance.” Evaluation Journal of Australasia (2006). Available from aes.asn.au.


Saturday, September 29, 2012

Case study methodology

Hashimoto, Pillay, and Hudson, (2010) “An Evaluation Framework for Sustaining the Impact of Educational Development.”

p.103: This study adopted a case-study methodology because the case can be separated out for research in terms of time, place, or physical boundaries (Creswell, 2008). Separating the case was critical because there are ongoing education reform projects happening in Egypt. The procedure was guided by Yin's (2003) model with its five sequenced steps: (i) developing research questions; (ii) identifying the research assumptions; (iii) specifying research unit(s) of analysis; (iv) logically linking the data to the assumptions; and (v) determining the criteria and interpreting the findings. The case study is well suited to illuminating the contextually embedded evaluation process using multiple data sources. This study used three data sources to triangulate the data: (i) the JICA evaluation reports on the project, (ii) a survey questionnaire, and (iii) interviews with stakeholders. To unravel the two main research questions, the following three sub-questions were applied to these three data sources: (1) Who should be involved in the evaluation process? (2) When should the evaluation be conducted? (3) Why should the evaluation be conducted? These three questions provide a holistic understanding of the key players involved in making decisions, the rationale for the timing of the evaluation activities (investigating the assumption underpinning such timing), and the justification of the evaluation actions. Such a holistic approach is consistent with Burns's (2000) argument that case studies should consider constructs from multiple perspectives in order to develop a deeper and more complete understanding of those constructs.