
Thursday, March 7, 2013

Collecting Case Study Evidence

There are six major sources of evidence discussed in this chapter of Yin (2009). 

Documentation. This includes letters, emails, diaries, agendas, meeting minutes, proposals, progress reports, formal studies or evaluations of the same case, and news clippings or other relevant media articles. Documents cannot be accepted as literal recordings of events; their value lies in corroborating and augmenting evidence from other sources. Inferences can be made but must be backed up by observed data, remembering that documents are written for a specific audience and purpose.

Archival records. These include census or statistical data, service records, budgets or personnel records, maps and charts, and survey data about a particular group. Again, these would have been produced for a specific purpose and audience, so this must be taken into account when determining the usefulness of such data.

Interviews. These should take the form of guided conversations rather than structured queries. Yin stresses the importance of not asking interviewees the 'why' questions of your study directly but asking 'how', which has a much softer, less accusatory tone. There are different types of interview: the in-depth interview and the focused interview. The former asks participants about the facts of the matter as well as their opinion of events; this obviously needs to be corroborated by other sources, and it is also wise to search for contrary evidence to strengthen the findings. In the focused interview, you follow a certain set of questions derived from the case study protocol, taking care not to ask leading questions. A third type of interview consists of more structured questions along the lines of a survey. For all types of interview, though, care must be taken because responses are subject to problems of bias, poor recall and poor or inaccurate articulation. As such, all interview data should be corroborated with other sources.

Direct observation. This provides opportunities for gathering additional data about the topic under study. Multiple observers can increase the reliability of this method, though this is not always a practical option.

Participant-observation. Similar to the previous method, but with the advantage of a more in-depth understanding of the case gained by taking on a role within the scenario. There are problems associated with this method, however, related to the potential biases that can arise from getting too involved. There is also the situation where the time spent participating leaves little time to observe.

Physical artifacts. These can be collected or observed and could include items such as a tool, a work of art, or a printout of computer statistics such as usage data. They have less potential relevance in most typical case studies but can be used as supplementary sources of evidence.

During collection of data from any of these six sources, there are three principles that can maximise the benefits of the data collection phase by addressing the problems of construct validity and reliability of the case study evidence.



1. Use multiple sources of evidence. This has the advantage of using converging lines of enquiry, a process of triangulation and corroboration. Construct validity is addressed since 'multiple sources of evidence provide multiple measures of the same phenomenon' (p. 117).



2. Create a case study database. This concerns how we organise and document the data collected; if done well (systematically), it can increase the reliability of the case study. The aim is to allow independent investigation of the data should anyone want to query the findings or final case study report. There are four possible components to a database (a minimal sketch of one way such a database might be indexed follows the list).

  • case study notes - these may come from interviews, observations, document analysis etc., but they must be organised in such a way that they can be easily accessed later on by external parties (and yourself).
  • case study documents - this can be unwieldy, but an annotated bibliography of such documents can be useful. Another method is to cross-reference documents in interview notes (say). As previously, these must be organised for easy access.
  • tabular materials - this could be a collection of quantitative results such as survey results, observational counts or archival data.
  • narratives - these are the case study investigator's open-ended answers to the case study protocol's questions. Each answer represents an attempt to integrate the available evidence and to converge upon the facts of the matter or their tentative interpretation.
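
As an aside, here is a minimal, hypothetical sketch (not from Yin, 2009) of one way such a database could be indexed so that items are easy to retrieve and cross-reference later. All field names and example entries are invented for illustration only.

    # Hypothetical index for case study evidence (notes, documents, tabular
    # materials, narratives) so items can be retrieved and cross-referenced.
    from dataclasses import dataclass, field

    @dataclass
    class EvidenceItem:
        item_id: str                     # e.g. "INT-03" for the third interview
        source_type: str                 # "note", "document", "tabular" or "narrative"
        summary: str                     # one-line description of the item
        location: str                    # file path or physical location
        cross_refs: list = field(default_factory=list)  # ids of related items

    database = [
        EvidenceItem("INT-03", "note", "Interview with project lead", "notes/int03.docx"),
        EvidenceItem("DOC-12", "document", "Year 2 progress report", "docs/report_y2.pdf",
                     cross_refs=["INT-03"]),  # report was discussed in interview 3
    ]

    # Simple lookup so an external reviewer can trace any cited item.
    index = {item.item_id: item for item in database}
    print(index["DOC-12"].cross_refs)    # -> ['INT-03']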


3. Maintain a chain of evidence. This will allow an external observer to follow the derivation of any evidence from the initial research questions to ultimate case study conclusions and thus increase the reliability of the case study. The chain runs from questions to protocol to evidence to database to final report, each linked forward and backwards (see p123 for a diagram).
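
To illustrate what 'linked forward and backwards' could look like in practice, here is a small hypothetical sketch of a chain of evidence; the identifiers are invented and are not from Yin.

    # Each layer records which item in the neighbouring layers it came from and
    # supports, so a reader can trace a conclusion back to the research question.
    chain = {
        "RQ1":              {"forward": "PROTOCOL-Q4"},                        # research question
        "PROTOCOL-Q4":      {"back": "RQ1", "forward": "INT-03"},              # protocol question
        "INT-03":           {"back": "PROTOCOL-Q4", "forward": "DB-NOTE-17"},  # evidence (interview)
        "DB-NOTE-17":       {"back": "INT-03", "forward": "REPORT-FINDING-2"}, # database entry
        "REPORT-FINDING-2": {"back": "DB-NOTE-17"},                            # final report claim
    }

    # Walk backwards from a report finding to the research question it answers.
    node = "REPORT-FINDING-2"
    while "back" in chain[node]:
        node = chain[node]["back"]
    print(node)   # -> RQ1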

Sunday, January 27, 2013

Another lit review

Parylo, O. (2012). Evaluation of educational administration: A decade review of research (2001–2010). Studies in Educational Evaluation, 38(3–4), 73–83. doi:10.1016/j.stueduc.2012.06.002

This long article (>10,000 words), which reviews the literature and then concludes by calling for a further research study into the topic due to the lack of publications, is exactly the kind of paper I was hoping to write.
There is an introduction, an overview of the topic (literature) and a methodology section (which includes the list of data sources and the method used). The findings section includes a summary of 8 evaluation journals and the types/topics of articles included therein, followed by a summary of the relevant articles and then thematic trends. The discussion section is just 4 paragraphs, the implications just 2, and the significance and conclusion another 2. The section on limitations makes some good points and I feel this is an area missing from my paper. It mentions that unpublished data is overlooked and that coding was done by only one researcher, hence possible researcher bias. In the implications section, a call is made for further research into the foci of educational evaluation and their purposes. Evaluation driven by grant requirements is mentioned, and a call is made 'to better understand how program evaluation is being used in education and what should be done to improve its effectiveness' (p. 81).
I particularly like how the method is referenced (p. 76):
The type and purpose of program evaluation articles were determined according to the classification of Stufflebeam (2001). Overall, this analysis used common strategies of qualitative content analysis (as summarized by Romanowski, 2009): (1) careful examination of the textual data (i.e., published articles); (2) data reduction (i.e., selecting those articles that would help to answer research questions); (3) organizing condensed data (i.e., organizing the articles in groups); and (4) revisiting the data to confirm the findings.


Saturday, November 10, 2012

The write/right journal

I'm going to try and publish my literature review, but the question I'm now finding hard to answer is: which journal? The suggestion was to target one of the journals mentioned in the paper. So for each of these I'm checking the journal requirements, particularly the length of papers accepted, since this one is particularly long. Summaries of journals are on this page. Studies in Educational Evaluation seems like a good possibility: they publish really long articles (over 9000 words) and 4 of the articles in my paper came from this journal. Another possibility is Assessment and Evaluation in Higher Education, but they limit articles to between 3000 and 5000 words. I'm also interested in the Journal of MultiDisciplinary Evaluation, which has long articles; it is open access and states it has a quick turnaround for feedback. They suggest a limit of 10-12 pages, though they do accept longer articles.


Saturday, October 27, 2012

Writing

I'm currently writing up my lit review, which is taking longer than I expected. Initially I had planned to conduct a review of the literature as I progressed through the PhD. However, we have discussed writing this up as a paper defining why a study of the nature I proposed for my PhD is needed, i.e. a rationale if you like.
One of my issues with this is that there will be more literature as I go, and so this can't be one defining document. Marina suggested I publish the table in which I'm synthesising findings from the papers on a public website so it can be constantly updated. I like this idea.
On a writing retreat this week I showed my writing group my efforts so far, and I have to say I was a bit disappointed with the feedback at first. The general opinion was that I shouldn't try to publish this lit review, as most journals would not be interested in it; after all, who am I to tell the world..? They suggested I keep writing it up but that it become a chapter in the thesis rather than a journal article. That's actually fine and I'll discuss it with my supervisors next week.

So where I'm at right now is trying to weave the themes that have come from my searching together with the relevant items from the articles, and then craft a suitable reflection/discussion and conclusion.
Themes I'm working with are:

Formative vs. summative evaluation
Building capacity in evaluation skills
Non-completion of project elements (such as evaluation, reports and the project itself)
Non-usage of evaluation findings
Importance of stakeholder involvement
Inaccurate initial expectations (could be linked to #2)
Importance of planning and defining clear evaluation criteria
Evaluation approaches, models or frameworks



Tuesday, October 23, 2012

Theory-based evaluation


Nesman, Batsche & Hernandez (2007)
Theory-based evaluation of a comprehensive Latino education initiative: An interactive evaluation approach

This paper describes a five-year initiative to develop, implement and evaluate program(s) that would increase Latino student access to Higher Education. Theory of change and logic models were used to guide the program, as these have previously been shown to be most effective when trying to create social change within comprehensive community initiatives. 'Logic models can also serve as a roadmap for implementers to move from ideas to action by putting components together into a visual framework (Hernandez & Hodges, 2003).' (p. 268)
A conceptual model was developed which incorporated context, guiding principles, implementation strategies, outcomes and evaluation, and resulted in a vision statement for the program. The paper also describes the interventions which were to be implemented, and goes on to describe the evaluation approach in more detail. They use an embedded case-study design (Yin, 1984) and mixed methods with a developmental approach, which allowed for adaptation over time as the project moved through the varying stages of completion. Key questions were developed associated with each goal from the funding agency, i.e. Process, Impact and Sustainability. One of the key findings under process was that the initial plan 'had been overly ambitious and that it would not be possible to accomplish this large number of interventions with the available resources' (p. 272). This resulted in a paring back of outcomes, with some initiatives being prioritised and some being dropped altogether. A finding under impact was that 'although it would be several years before long term outcomes could be effectively measured, the evaluators developed a tracking system to monitor changes in student outcomes each year' (p. 274). With sustainability, it was felt that strategies within institutions were more likely to be sustained than those relying on collaboration and cross-institutional coordination, unless there was ongoing external support (p. 279).
The authors also wrote about lessons learned from this approach. If the benefits of theory-based evaluation are to be maximised, it requires training of program participants on logic model development and theory of change approaches early in the process of implementation. This training can lead to the development of interactive and productive relationships between evaluators and implementers. Adopting a developmental approach was also highly beneficial in this project.

Sunday, October 21, 2012

The Multiattribute Utility (MAU) approach


Stoner, Meadan, Angell and Daczewitz (2012)
Evaluation of the Parent-implemented Communication Strategies (PiCS) project using the Multiattribute Utility (MAU) approach



The Multiattribute Utility (MAU) approach was used to evaluate a project federally funded by the Institute of Education Sciences. The purpose of the evaluation was formative, measuring the extent to which the first two (of three) goals of the project were being met; it was completed after the second year of the project. The project goals were:
(a) develop a home-based naturalistic and visual strategies intervention program that parents can personalize and implement to improve the social-pragmatic communication skills of their young children with disabilities;
(b) evaluate the feasibility, effectiveness, and social validity of the program; and
(c) disseminate a multimedia instructional program, including prototypes of all materials and methods that diverse parents can implement in their home settings.
MAU was chosen as an approach because it was participant oriented, allowing the parent representatives to have a voice in the evaluation. There are seven steps in a MAU evaluation, and each is discussed in the paper.
1. Identify the purpose of the project
2. Identify relevant stakeholders (these individuals will help make decisions about the goals and attributes and their importance)
3. Identify appropriate criteria to measure each goal and attribute
4. Assign importance weights to the goals and attributes
5. Assign utility-weighted values to the measurement scales of each attribute
6. Collect measurable data on each attribute being measured
7. Perform the technical analysis
An important point under step 3 is to identify the essential attributes within each goal area rather than aiming for a set number of attributes. For this project, 28 attributes were defined by the stakeholders and 25 were found to be met through the evaluation. (A small worked sketch of the weighting arithmetic follows below.)
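
To make steps 4-7 concrete, here is a minimal, hypothetical sketch of the arithmetic a MAU-style aggregation typically involves; the attribute names, weights and utility values below are invented and are not taken from the PiCS evaluation.

    # Importance weights are assigned to attributes (step 4), measured results
    # are converted to utility values on a 0-100 scale (steps 5-6), and the
    # technical analysis (step 7) combines them into one weighted score.
    attributes = {
        # attribute: (importance_weight, utility_of_measured_result_0_to_100)
        "parents can personalise the intervention": (30, 80),
        "gains in social-pragmatic communication": (45, 65),
        "social validity reported by parents": (25, 90),
    }

    total_weight = sum(weight for weight, _ in attributes.values())
    overall = sum((weight / total_weight) * utility
                  for weight, utility in attributes.values())

    print(f"Overall weighted utility: {overall:.1f} / 100")   # roughly 75.8 here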
For this project, the MAU approach was found to be in keeping with one of the core values of the project, that of stakeholder involvement. Four primary benefits of using this approach were identified, along with one concern. The MAU approach:
(a) was based on the core values of the PiCS project;
(b) engaged all stakeholders, including parents, in developing the evaluation framework;
(c) provided a certain degree of objectivity and transparency; and
(d) was comprehensive.
The primary concern was the length of time and labour required to conduct the evaluation. For this reason the authors believe it may not be applicable for evaluating smaller projects. 

Saturday, October 20, 2012

Archipelago approach


Lawrenz & Huffman, (2002)
The Archipelago Approach To Mixed Method Evaluation

This approach likens the different data collection methods to groups of islands, all interconnected 'underwater' by the underlying 'truth' of the program. The approach has its advantages: since it is often difficult to uncover the complete 'truth', using a combination of data types and analysis procedures can help get closer to it. The authors cite Green & Caracelli (1997) and their three stances on mixing paradigms (the purist, pragmatic and dialectical stances) and then attempt to map the archipelago approach onto each of the three. In doing so they oppose Green and Caracelli's view that the stances are distinct; the authors believe their metaphor allows for simultaneous consideration and thus provides a framework for integrating designs.

A nationally funded project is evaluated using the archipelago approach to highlight its benefits. Science teachers in 13 high schools across the nation were recruited, and consideration was given to the level of mixing of methods in an area that traditionally used a more 'logical-positivist' research approach. Three different approaches were used:
1. Quasi-experimental design - both quantitative and qualitative assessments of achievement. About half of the evaluation effort, in terms of time and money, was spent on this approach. This was pragmatic, as it was included to meet the needs of stakeholders.
2. A social interactionism approach - data was gathered through site visits to schools and classrooms, with observations recorded in open-ended field notes; this data produced narrative descriptions of each site. About one third of the evaluation effort focused on this approach.
3. A phenomenological study of six of the teachers during implementation of the new curriculum, via in-depth interviews.
The archipelago approach extends the idea of triangulation, which is linear, to take into account the complex, unequally weighted and multi-dimensional nature of the evidence. When considering the underlying truth about the effectiveness of the program, achievement was viewed as likely to be the strongest indicator, and therefore most effort went into this approach. The learning environment was considered the next strongest indicator and the teachers' experience the least.
‘This approach created a way for the authors to preserve some unique aspects of each school while at the same time considering that the schools were linked in some fundamental way’ (p. 337). It is hoped that this approach can lead evaluators to think less in either/or ways about mixing methods and more in complex, integrative ways.

Another Website?

At my supervision session this week, we have been talking about the possibility of creating a public website to publish the lit review. A website would become a useful resource, allow public comment, and become the place where I could show my results, particularly if I go on to develop an interactive evaluation instrument.

The question is, can this website act as a publication? I could then refer to the table (which is growing unwieldy for a Word doc) when I write the paper on the literature synthesis. Questions to ponder.

Evaluation Capacity Building


Preskill, H., & Boyle, S. (2008). A Multidisciplinary Model of Evaluation Capacity Building. American Journal of Evaluation 29(4), 443–459.


Evaluation Capacity Building (ECB) has become a hot topic of conversation, activity and study in recent times. This paper offers a comprehensive model for designing and implementing ECB activities and processes. The model draws on the fields of evaluation, organisational learning and change and adult learning.
Triggers for ECB usually come from external demands (accountability, environmental change, policies and plans) or internal needs (organisational change, a mandate from leadership, a perceived lack of knowledge and skills, increased funding, a perceived shortage of evaluators, a desire to improve programs).
Assumptions are also required and may include: '(a) organization members can learn how to design and conduct evaluations, (b) making learning intentional enhances learning from and about evaluation, and (c) if organization members think evaluatively, their programs will be more effective' (p.446).
And expectations of any ECB effort may include:
  • Evaluations will occur more frequently 
  • Evaluation findings will be used more often for a variety of purposes (including program improvement, resource allocations, development of policies and procedures, current and future programming, and accountability demonstrations)
  • Funders will be more likely to provide new, continuing, and/or increased resources
  • The organization will be able to adapt to changing conditions more effectively
  • Leaders will be able to make more timely and effective decisions
  • The organization will increase its capacity for learning


The paper looks at each of the 10 teaching and learning strategies (from the inner left circle), but then goes on to stress the importance of the design and implementation of any initiative. With Design, the following are of importance: 
  • identifying ECB participants' characteristics - the evaluation competence of potential participants needs to be assessed
  • determining available organisational resources - including facilitation and time
  • relevant evaluation, learning and individual and organisational change theories
  • ECB objectives - cognitive, behavioural and affective
** some good references here on improving attitudes towards evaluation and reducing stress and anxiety around evaluation***

There is a section on transfer of learning and acknowledgement that dialogue, reflection and articulating clear expectations for what and how to transfer knowledge and skills are critical for longer term impacts of ECB (p453).

In terms of sustainable practice, the right circle of the model is described in more detail, with each of the 8 elements discussed. Finally, the diffusion element is explored; the authors use a water-ripple or reverberation image to depict how ECB emanates from the organisation.

They conclude that for ECB to be transformational, efforts must be intentional, systematic and sustainable. (p. 457)




Sunday, September 30, 2012

An evaluation framework for sustaining the impact of educational development


Hashimoto, Kazuaki, Hitendra Pillay, and Peter Hudson. “An Evaluation Framework for Sustaining the Impact of Educational Development.” Studies in Educational Evaluation 36, no. 3 (2010): 101–110.

The context of this paper is international aid agencies' funding of educational development projects in recipient countries and the apparent ineffectiveness of those projects. The authors were interested in overcoming donor agencies' internal compliance requirements by looking at how local evaluation capacity could be developed and how developments could continue to be sustained after project completion. Although this context is not the HE sphere, the same could be said of external funding agents vs internal projects.

The authors define process evaluation (quoting the DAC Network on Development Evaluation (2008), Evaluating development cooperation, retrieved January 6, 2010, from http://www.oecd.org/dataoecd/3/56/41069878.pdf) and state that the importance of process evaluation lies in involving the participants in making decisions on a project, such as terminating the project if necessary (p. 102). The authors cite Miyoshi and Stemming (2008) in noting that most studies on evaluation with participatory approaches are not underpinned by evaluation theories but are method-oriented.

So an Egyptian project was used as a case study (see previous post), and there were two research questions: (1) how can an entire educational development project be evaluated? and (2) how can capacity development in educational reform be evaluated? Participants included six different groups of stakeholders: funding body, local administration, researchers, teachers, parents and students. The analytic technique used was pattern matching (Yin, 2003, p. 116) to enhance internal validity. Three themes emerged from the study: context, outcome and process evaluation.

Outcome evaluation:

  • assessing outcomes is necessary for determining the success of an educational reform project.
  • Outcome evaluation should include local participant involvement in evaluating a project, since they are the end users.
  • Local stakeholders should not be seen as informants or discussants but rather as evaluators working jointly with aid agencies so they can appreciate the success and failure of achieving the objectives.
  • Results supported the use of an external evaluator who, in collaboration with the internal evaluators of the project, can undertake a macro-level evaluation of the project.


Context Evaluation

  • context evaluation assesses the local needs and problems to be addressed and the cultural, political and financial issues, assists in designing a project, and sets objectives before the initiation of an educational project.
  • more local stakeholders such as representatives of local community are needed to join the evaluation to make their voice heard because after all they are the beneficiaries. 
  • This engagement of various stakeholders in dialogues throughout the project from the project design phase may enable their opinions and interests to be considered for designing and implementing a more effective project (House & Howe, 2000).
Process evaluation


  • There was a need for adopting a systematic participatory evaluation approach involving individuals and groups at the different levels of an educational system, which was central to process evaluation. 
  • the linchpin of a sound process evaluation is employing skilled people
  • the practice and culture of process evaluation should be nurtured during the life of educational projects and be institutionalized locally. This has the potential to sustain the impact of projects. 

Conclusion - conventional monitoring and evaluation practices do not have the ability to sustain a project beyond its lifetime, and 'paradigms should shift from outcome-focused evaluation currently dominated by international donor agencies to process evaluation conducted largely by local participants but also supported by donor agencies' (p. 109).

The framework itself is outlined in a figure on p. 108 of the paper.
Other articles that follow this line of thinking:
 Donnelly, John. Maximising participation in international community-level project evaluation: a strength-based approach. [online]. Evaluation Journal of Australasia, v.10, no.2, 2010: 43-50. Availability:<http://search.informit.com.au/fullText;dn=201104982;res=APAFT> ISSN: 1035-719X. [cited 01 Oct 12].

Nagao, M. (2006). Challenging times for evaluation of international development assistance. Evaluation Journal of Australasia. (aes.asn.au)







Saturday, August 25, 2012

Effective Dissemination


Southwell, D., Gannaway, D., Orrell, J., Chalmers, D., & Abraham, C. (2010). Strategies for effective dissemination of the outcomes of teaching and learning projects. Journal of Higher Education Policy and Management, 32(1), 55-67. http://dx.doi.org/10.1080/13600800903440550

I came across this article and found some useful information in it and in its related ALTC (2005) report of the same name. The report was reviewed in 2011 and is now in the OLT repository.

Summary:

The paper looks at a range of Australian and international funding bodies that support L&T projects and asks the question: 'How can a national institute for learning and teaching in higher education maximise the likelihood of achieving large-scale change in teaching and learning across the Australian higher education sector, especially through its grants program and resource repository and clearing house?' (p. 58)
Now whilst this covers national schemes, there are some important findings that could be applied to smaller scale projects and schemes such as the one I'm looking at, and there are also some interesting findings on evaluation. Furthermore, the ALTC report identifies some other international funding bodies which I could follow up on in terms of evaluation requirements etc.
Questions from the guiding framework of the project team are included and a couple would be relevant to my study:
What are the influences on and factors affecting a teacher's decision to make use of a project or process that is being scaled up? What local and external factors facilitate or create barriers affecting the teacher's decisions? What is the relationship between the development of local capacity and the quality of external reform structures? (p. 59)

The projects which were selected came from a range of locations but focussed more on projects that sought to change teaching and learning processes, practices and beliefs rather than on projects that focussed on developing products. (p. 60)

Some Findings:
Two items of note were:
1. Initiation of an innovation - Intended users of an innovation need to be engaged very early in the planning stages of the innovation in an endeavour to ensure adoption and take-up of ideas later on (p. 61). This can be extrapolated to my project in terms of getting stakeholders involved from the early stages and getting uptake of evaluation findings.
2. Implementation, embedding and upscaling of innovation - one influence on this comes from the personal conception of teaching (p. 62). I'm suggesting that the same is true for evaluation. Also, academics with teaching qualifications appear to be more open to investigating alternative curriculum and teaching approaches (Lueddeke, 2003). It would be interesting to test whether this is the case with evaluation, i.e. does the qualified teacher value evaluation mechanisms more and therefore employ them in projects, compared with academics without teaching qualifications? And if this is the case, is it because they have been 'taught' evaluation methods and skills and therefore their conception is different?

Conditions for successful dissemination:
  1. effective, multi-level leadership & management
  2. climate of readiness for change
  3. availability of resources
  4. comprehensive systems in institutions and funding bodies
  5. funding design
This last point is of interest - it mentions that expectations about the approaches to projects and activities that ought to be adopted are taken from the funding design. So I could say that if there is no real expectation of evaluation requirements then of course the project applicants are not going to be that stringent. I can link to this in the analysis of phase one.

One final mention in the conclusion (p. 65) says 'An important aspect of this study was to identify and to recommend that learning and teaching grant recipients must be supported and provided with access to experts in educational innovation and evaluation.'


The 2005 Carrick report goes into much more detail and evaluation is mentioned throughout. It also gives recommended strategies for each of the 5 'conditions', at the national, institution and discipline levels. When I read through, it looks like they were all implemented at MQ, i.e. recommending standard requirements for applications, including a description of which evaluation strategies will be used.

For item 3, the report recommends providing project teams with access to specialist expertise - this could be for evaluation, or at the least for evaluation resources. Findings on p. 55 state that 'those responsible for the project may require assistance in designing an appropriate evaluation process'. There is also information on this on page 45 (emerging themes).

For item 4, it mentions that support for quality processes, particularly monitoring and evaluation, ought to be supplied, and that evaluation should be reported within an evaluation framework. Also, on p. 58 there are findings that institutions that allocated funding AFTER the projects were finished saw projects evaluated well and regularly and eventually embedded within the institution. 'Generally, however, experiences quoted in the literature and in case studies evidenced poor quality of evaluation if done at all.' It then goes on to explain that frequently dissemination across institutions occurred before it was apparent that there was any value or improvement in student learning, and therefore impact analysis was vital.

The 2011 review of this project didn't seem to add much more in regards to evaluation, other than a recommendation that external evaluation reports be provided publicly. There was a reference to Dow (2008) which I should follow up; it could provide some useful data.
An evaluation of the Australian Learning and Teaching Council 2005-2008. Sydney: Australian Learning and Teaching Council.




What can be evaluated?

The following excerpt is taken from: Johannessen, E.M. (2001). Guidelines for evaluation of education projects in emergency situations. The Norwegian Refugee Council. http://toolkit.ineesite.org/toolkit/INEEcms/uploads/1039/Guidelines_for_Evaluation_of_Educ_Projects.PDF


5 WHAT CAN BE EVALUATED?
It is common in many evaluations to focus on the OECD criteria: relevance and fulfillment of objectives, development efficiency, effectiveness, impact and sustainability. Relevance refers to whether the project addresses the needs of the beneficiaries, efficiency to how productively the resources have been used while effectiveness relates to whether the objectives of the project were achieved. Impact is the long-term effect  while sustainability relates to the maintenance of the changes after the project or program has been terminated (Dale 1998, Lewin 1993, Samset 1993).   Sida adds "lessons learned" to the criteria which refer to important general experience the evaluation has yielded. All or some of the criteria are commonly addressed in evaluations of education projects although the terms may be used differently.
Although the concepts seem relevant and reasonable, there are several problems associated with them. First of all, they need to be defined and specified in terms of identifiable activities when the project is being planned. This is often not the case. Secondly, the concepts "relevance", "impact" and "sustainability" may be difficult to investigate. The project may have been going on for a too short period and/or there is not enough time available to find answers to this type of questions. Another objection is that the OECD concepts are related to final products and not to description of the processes that have taken place. No projects/programs, whether in research or evaluations, develop neatly in a linear way. The description of the project and the steps we plan in the beginning are always transformed as they are being implemented. Major decisions regarding adjustments and turning points should be described and justified and be part of the evaluation.

I think it will be useful in the next paper when I'm analysing the themes and relating them back to the literature.

The following excerpt is from p. 12. The author uses the terminology 'Terms of Reference' (ToR) to describe the evaluation plan. This is in line with other articles and will fit into the paper on the lit review/gaps in the lit.

Often the terms of reference are too vast and ambitious taken the time and resources into consideration. Sometimes questions and topics are suggested that are not possible to answer, at least not within the limited time available. As a result the evaluators know already before they start working that it is not feasible to follow the ToR.
And again on p18:
8 WHEN TO EVALUATE
Ideally the planning of an evaluation should start parallel with the activities/project. But more often the opposite is the case; the coordinators start to think about evaluation when the project is concluded or about to finish.