
Friday, October 30, 2015

Evaluation Planning Instrument - next steps

Presented the development of the tool at ISSOTL this week. Great feedback - comments from the audience indicated that they were waiting for the interactive version and would like to see examples included.

Interest from RMIT - could I go and present to seed grant holders (in Science) on what to do around project evaluation?
Interest from UBC - they need help with this and have some skills to exchange - need to follow up with them to find out if they mean programming skills!

Next steps:

  • update the steps to include feedback from AES and focus group 2.
  • harvest the examples collected from focus groups
  • mind map how the online form could work - including branching (see the rough sketch after this list)
  • plot out in Excel using simple logic
  • contact a programmer and think about what this may look like in an online version
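
To get a feel for what the branching might look like before talking to a programmer, here is a very rough sketch (in Python rather than Excel) of one way the form could branch. The step names, questions and branch targets below are placeholders made up for illustration, not the actual instrument content:

    # A rough, hypothetical sketch of branching logic for the online planning form.
    # Step names, questions and branch targets are placeholders, not instrument content.
    FORM = {
        "purpose": {
            "question": "Is the evaluation mainly for improvement or for accountability?",
            "branches": {"improvement": "stakeholders", "accountability": "reporting"},
            "next": None,  # stop if the answer doesn't match a branch
        },
        "stakeholders": {
            "question": "Who needs to be involved in the evaluation?",
            "branches": {},  # free-text step, always moves on
            "next": "methods",
        },
        "reporting": {
            "question": "Who is the report for and when is it due?",
            "branches": {},
            "next": "methods",
        },
        "methods": {
            "question": "What data will you collect and when?",
            "branches": {},
            "next": None,  # end of this sketch
        },
    }

    def run_form(start="purpose"):
        """Walk through the form, picking the next step from each answer."""
        step = start
        while step is not None:
            node = FORM[step]
            answer = input(node["question"] + " ").strip().lower()
            # branch on the answer if one is defined, otherwise fall through
            step = node["branches"].get(answer, node["next"])

    run_form()

The same structure - each step holding its question and the step each answer leads to - is essentially what would be plotted out in the Excel mock-up before handing it to a programmer.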

Friday, April 24, 2015

Phase 3 planning - focus group

After a few false starts with running a workshop on the planning framework/instrument, finally got 10 people volunteering to attend a focus group.

The aim of the focus group was to test the waters with the framework. Get academics with small L&T projects in a room. Talk them through the different stages and get them to give examples for each step. Then find out what they think of the framework.

I was really excited at the prospect of 10 participants as two previous attempts to run this at MQ had resulted in only 2 and 1 respondents respectively. However, luck was not on my side again - the torrential rain and storms in Sydney on the day prevented three from attending, so the focus group went ahead with 7 participants.

Two male, five female. Four faculties were represented:
Arts and Social Sciences (Journalism; Communication; Education)
Business (Accounting)
Engineering & IT (Civil and Environmental Engineering)
Health (Nursing)

Two of the participants had had large-scale (OLT-type) L&T grants, one was new to the L&T grant space and others had received numerous small-scale L&T grants.

General Observer Comments:

There was a general openness to talking about evaluation and a positive vibe in the room during the discussions, particularly when talking about their projects.

A diagram would have helped when describing evaluation and research synergies.

There was also a general understanding of the value of evaluation and a willingness to embrace it in their projects.

However some people felt overwhelmed by the info/framework - too many steps. This led to comments regarding 'having to' complete it, and negativity that it would take too much time to complete and, if this was required, it would put them off doing evaluation. --> this kind of missed the point (I thought) - as this is meant as a resource to help them formulate their evaluation plan.


I asked for definitions of Evaluation (with respect to L&T projects).


  • quality control - monitoring level of proficiency
  • feedback - monitoring and improving learning outcomes
  • an assessment of a particular project, either qualitative or quantitative
  • feedback re: effectiveness in student learning and efficiency/efficacy in delivery of content and development of student directed expectation
  • a process to determine if the stated project's aims and objectives were achieved and, if not, what we can learn from it
  • finding out if it meets the intended purpose of the project
  • identify what works/needs review
  • comparison of outcome against objectives (designed at beginning of project)


I also asked them to say how they 'felt' about evaluation. Words and phrases included:

  • useful to gather "lessons learned"
  • essential part of project feedback loop
  • must be multidimensional
  • necessary useful tool
  • useful if done halfway through project rather than just at the end
  • intrusive on time
  • useful at critical stages for modification of delivery
  • great! I love to see how it's evolved, turned out, even if it's a catastrophe!
  • it's required to improve subject quality
  • I welcome evaluation as long as it's not an unwieldy or unnecessarily complex process
  • I see it as a critical and integral part of any project


Then at the end of the session I asked them to reconsider evaluation and write whether their thoughts or feelings about evaluation had changed since the beginning.

  • two people said no change
  • four stated they were now more aware of different evaluation foci/purposes
  • one stated they were more 'dispirited' because of the quantity of work required to complete the framework


Next steps:
1. Transcribe the audio and analyse data.
2. Transpose examples given by participants in the 'workshop' section to the framework document
3. Think about which steps could be reduced by either combining or removing.
4. Run the session again with the revised framework.



Tuesday, February 4, 2014

Update Jan 2014

The agenda for this month is to update the Gantt chart - where are we and what are we adjusting/modifying? Acting as a project evaluator I am able to test out my framework - it may be useful to include this as a paper in my study as the findings will certainly impact on stage 3. Actually, if I think about it, that was what my original proposal suggested - that I use the framework to evaluate 3 local projects. I didn't end up doing that as it wasn't my role. Phase 2 has turned into a triangulation/measure of the findings from phase 1 and the lit review.
Thoughts on triangulation: use of three case studies offering an in-depth look at the findings from phase one and the literature review - contextualising what's happening. Deeper, richer insights into the findings.

There is a need to engage people and bring them into the process of evaluation - what needs to change? what comes next? This is the contribution to knowledge.

Also need to think about publications and budget spending for this year: aiming for one new publication, i.e. not including the article that went to JFHE. Need to target a journal for the case study manuscript.

Thursday, October 24, 2013

Brown's 8 questions

Planning the next paper (4):
  1. Who are the intended readers: grant bodies / grant application reviewers / grant applicants
  2. What did you do: Three recipients of locally funded learning and teaching project grants were 'followed' as they conducted their research projects over 12-18 months. There were three interviews with the project managers (beginning, middle and end of project) and one interview with the project leaders (the person who applied for the grant and had the initial idea). Other project documentation was also used as evidence / data and project meetings were attended where relevant. The questions which guided the data collection were developed from the literature.
  3. Why did you do it: To collect evidence as it was being generated rather than retrospectively when people have to 'recall' information which may be influenced by their perception of evaluation and other unknown factors. To see how perceptions changed over the course of the project and what influencing factors caused these changes.
  4. What happened: all three had different experiences and perceptions of evaluation to begin with. This appeared to dictate how they conducted the evaluations, what they evaluated and what they did with any evaluation data.
  5. What do the results mean in theory: don't know at this stage....
  6. What do the results mean in practice: production of guidelines for project evaluation for/from the grant funding body. Development of a framework to guide project evaluation could arise from the findings.
  7. What is the key benefit for readers: the grant bodies (ie application reviewers) could use the findings to guide their application and review processes. Academics and project leaders could use this framework to support them in their learning through the project and enable the institution to benefit more widely from the projects. 
  8. What remains unsolved: How this framework could best be implemented, feedback from use of the framework.



Friday, October 12, 2012

A Project Profile Approach to Evaluation

Accountability is a common driver for evaluation, particularly as funding bodies strive to obtain measurable gains for their investments: in teacher content knowledge, change in practice and, of course, student learning. The authors of this study insist that individual project profiles are needed to take into account the unique contextual variables of a project whilst comparing projects across a funded program.

The context for this study was the professional development of teachers in the K-12 sector under the Improving Teacher Quality State Grants Program. Each grant recipient is required to conduct some internal evaluation processes and also to be part of an external evaluation. This paper reports on the design of the latter. The goals were to determine:

(1) how well projects attained their objectives; 
(2) the quality of the PD that was delivered; and 
(3) what outcomes were achieved for teachers and students.

Nine projects were investigated and a profile was constructed for each. The profile consisted of six sections: Project Background; Project Design; Participants and their Schools; Quality of Implementations; Satisfaction Survey; Outcomes and Recommendations. In other words, do not compare outcomes alone, since the teachers and the school settings can vary significantly across school sites and therefore outcomes alone do not tell the whole story.

Then the model of using project profiles was compared to a model for evaluating professional development programs (Guskey, 2000). Guskey's hierarchical model includes five levels, moving from the simple to the more complex: 
1. Participants' reactions
2. Participants' learning
3. Organization support and change
4. Participants' use of new knowledge and skills
5. Student learning outcomes


The authors mapped their model against Guskey's and found that they needed to modify Guskey's model to make it more holistic: they created a central core of levels 1, 2, 4 and 5, each fed by level 3, and then an outer layer of content, context and process (p. 152).

Sunday, September 30, 2012

An evaluation framework for sustaining the impact of educational development


Hashimoto, Kazuaki, Hitendra Pillay, and Peter Hudson. “An Evaluation Framework for Sustaining the Impact of Educational Development.” Studies in Educational Evaluation 36, no. 3 (2010): 101–110.

The context of this paper is international aid agencies' funding of educational development projects in recipient countries and their apparent ineffectiveness. The authors were interested in overcoming donor agencies' internal compliance requirements by looking at how local evaluation capacity could be developed and also how developments could continue to be sustained after project completion. Although this context is not applicable to the HE sphere, the same could be said of external funding agents vs internal projects.

The authors define process evaluation (quoting: DAC Network on Development Evaluation. (2008). Evaluating development cooperation. OECD DAC Network on Development Evaluation. Retrieved January 6, 2010, from http://www.oecd.org/dataoecd/3/56/41069878.pdf) and state that the importance of process evaluation lies in the involvement of the participants in making decisions on a project, such as terminating a project if necessary (p. 102). The authors quote Miyoshi and Stemming (2008) in noting that most studies on evaluation with participatory approaches are not underpinned by evaluation theories but are method-oriented.

So an Egyptian project was used as a case study (see previous post) and there were two research questions: (1) how can an entire educational development project be evaluated? and (2) how can the capacity development in educational reform be evaluated? Participants included six different groups of stakeholders: funding body, local administrators, researchers, teachers, parents and students. The analytic technique used was pattern matching (Yin, 2003, p. 116) to enhance internal validity. There were three emergent themes in the study: context, outcome and process evaluation.

Outcome evaluation:

  • assessing outcomes is necessary for determining the success of an educational reform project.
  • Outcome evaluation should include local participant involvement for evaluating a project since they are the end users.
  • Local stakeholders should not be seen as informants or discussants but rather as evaluators working jointly with aid agencies so they can appreciate the success and failure of achieving the objectives.
  • Results supported the use of an external evaluator who, in collaboration with the internal evaluators of the project, can undertake a macro-level evaluation of the project.


Context Evaluation

  • context evaluation assesses the local needs and problems to be addressed (cultural, political and financial issues), assists in designing a project and sets objectives before the initiation of an educational project.
  • more local stakeholders, such as representatives of the local community, need to join the evaluation to make their voices heard because, after all, they are the beneficiaries.
  • This engagement of various stakeholders in dialogues throughout the project from the project design phase may enable their opinions and interests to be considered for designing and implementing a more effective project (House & Howe, 2000).
Process Evaluation


  • There was a need for adopting a systematic participatory evaluation approach involving individuals and groups at the different levels of an educational system, which was central to process evaluation. 
  • the linchpin of a sound process evaluation is employing skilled people
  • the practice and culture of process evaluation should be nurtured during the life of educational projects and be institutionalized locally. This has the potential to sustain the impact of projects. 

Conclusion - conventional monitoring and evaluation practices do not have the ability to sustain a project beyond its lifetime, and 'paradigms should shift from outcome-focused evaluation currently dominated by international donor agencies to process evaluation conducted largely by local participants but also supported by donor agencies' (p. 109).

Framework outlined in picture below (from p.108):
Other articles that follow this line of thinking:
Donnelly, John. “Maximising participation in international community-level project evaluation: a strength-based approach.” Evaluation Journal of Australasia 10, no. 2 (2010): 43–50. http://search.informit.com.au/fullText;dn=201104982;res=APAFT

Nagao, M. “Challenging times for evaluation of international development assistance.” Evaluation Journal of Australasia (2006). aes.asn.au







Saturday, September 29, 2012

Case study methodology

Hashimoto, Pillay, and Hudson (2010), “An Evaluation Framework for Sustaining the Impact of Educational Development.”

p.103 This study adopted a case-study methodology as the case can be separated out for research in terms of time, place, or some physical boundaries (Creswell, 2008). Separating the case was critical as there are ongoing education reform projects happening in Egypt. The procedure was guided by Yin's (2003) model with five sequenced steps: (i) developing research questions; (ii) identifying the research assumptions; (iii) specifying research unit(s) of analysis; (iv) the logical linking of data to the assumptions; and (v) determining the criteria and interpreting the findings.

The case study is convenient for illuminating the contextually embedded evaluation process using multiple data sources. This study used three data sources to triangulate the data: (i) the JICA evaluation reports on the project, (ii) a survey questionnaire, and (iii) interviews with stakeholders. To unravel the two main research questions, three sub-research questions were applied to these data sources: (1) Who should be involved in the evaluation process? (2) When should the evaluation be conducted? (3) Why should the evaluation be conducted? These three questions provide a holistic understanding of the key players involved in making decisions, the rationale for the timing of the evaluation activities (investigating the assumption underpinning such timing) and the justification of the evaluation actions. Such a holistic approach is consistent with Burns's (2000) argument that case studies should consider constructs from multiple perspectives in order to develop a deeper and more complete understanding of the constructs.