
Saturday, August 25, 2012

Effective Dissemination


Deborah Southwell, Deanne Gannaway, Janice Orrell, Denise Chalmers & Catherine Abraham (2010). Strategies for effective dissemination of the outcomes of teaching and learning projects. Journal of Higher Education Policy and Management, 32:1, 55-67. http://dx.doi.org/10.1080/13600800903440550

I came across this article and found some useful information in it, and in its related ALTC (2005) report of the same name. The report was reviewed in 2011 and is now in the OLT repository.

Summary:

The paper looks at a range of Australian and international funding bodies that support L&T projects and asks the question: 'How can a national institute for learning and teaching in higher education maximise the likelihood of achieving large-scale change in teaching and learning across the Australian higher education sector, especially through its grants program and resource repository and clearing house?' (p.58)
Now whilst this covers national schemes, there are some important findings that could be applied to smaller-scale projects and schemes such as the one I'm looking at, and there are also some interesting findings on evaluation. Furthermore, the ALTC report identifies some other international funding bodies which I could follow up on in terms of evaluation requirements etc.
Questions from the guiding framework of the project team are included, and a couple are relevant to my study:
What are the influences on and factors affecting a teacher's decision to make use of a project or process that is being scaled up? What local and external factors facilitate or create barriers affecting the teacher's decisions? What is the relationship between the development of local capacity and the quality of external reform structures? (p.59)

The projects which were selected came from a range of locations but focussed more on projects that sought to change teaching and learning processes, practices and beliefs rather than on projects that focussed on developing products (p.60).

Some Findings:
Two items of note were:
1. Initiation of an innovation - intended users of an innovation need to be engaged very early in the planning stages of the innovation in an endeavour to ensure adoption and take-up of ideas later on (p.61). This can be extrapolated to my project in terms of getting stakeholders involved from the early stages and getting uptake of evaluation findings.
2. Implementation, embedding and upscaling of innovation - one influence on this comes from the personal conception of teaching (p.62). I'm suggesting that the same is true for evaluation. Also, academics with teaching qualifications appear to be more open to investigating alternative curriculum and teaching approaches (Lueddeke, 2003). It would be interesting to test whether this is the case with evaluation, i.e. does the qualified teacher value evaluation mechanisms more and therefore employ them in projects, in comparison with academics without teaching qualifications? And if so, is that because they have been 'taught' evaluation methods and skills and therefore their conception is different?

Conditions for successful dissemination:
  1. effective, multi-level leadership & management
  2. climate of readiness for change
  3. availability of resources
  4. comprehensive systems in institutions and funding bodies
  5. funding design
This last point is of interest - it mentions that expectations about the approaches to projects and activities that ought to be adopted are taken from the funding design. So I could say that if there is no real expectation of evaluation requirements, then of course the project applicants are not going to be that stringent. I can link to this in the analysis of phase one.

One final mention in the conclusion (p.65) says: 'An important aspect of this study was to identify and to recommend that learning and teaching grant recipients must be supported and provided with access to experts in educational innovation and evaluation.'


The 2005 Carrick report goes into much more detail, and evaluation is mentioned throughout. It also gives recommended strategies for each of the five 'conditions', at the national, institution and discipline level. When I read through, it looks like they were all implemented at MQ, i.e. recommending standard requirements for applications, including a description of which evaluation strategies will be used.

For item 3, the report recommends providing project teams with access to specialist expertise - this could be for evaluation, or at the least for evaluation resources. Findings on p.55 state that 'those responsible for the project may require assistance in designing an appropriate evaluation process'. There is also information on this on p.45 (emerging themes).

For item 4, it mentions that support for quality processes, particularly monitoring and evaluation, ought to be supplied, and that evaluation should be reported within an evaluation framework. Also, on p.58 there are findings that projects allocated funds by their institutions AFTER they were finished were evaluated well and regularly and were eventually embedded within the institution. 'Generally, however, experiences quoted in the literature and in case studies evidenced poor quality of evaluation if done at all.' It then went on to explain that frequently dissemination across institutions occurred before it was apparent that there was any value or improvement in student learning, and therefore impact analysis was vital.

The 2011 review of this project didn't seem to add much more in regards to evaluation, other than a recommendation that external evaluation reports be provided publicly. There was a reference to Dow (2008) which I should follow up and which could provide some useful data:
Dow (2008). An evaluation of the Australian Learning and Teaching Council 2005-2008. Sydney: Australian Learning and Teaching Council.




What can be evaluated?

The following excerpt is taken from: Johannessen, E.M. (2001). Guidelines for evaluation of education projects in emergency situations. The Norwegian Refugee Council. http://toolkit.ineesite.org/toolkit/INEEcms/uploads/1039/Guidelines_for_Evaluation_of_Educ_Projects.PDF


5 WHAT CAN BE EVALUATED?
It is common in many evaluations to focus on the OECD criteria: relevance and fulfillment of objectives, development efficiency, effectiveness, impact and sustainability. Relevance refers to whether the project addresses the needs of the beneficiaries, efficiency to how productively the resources have been used, while effectiveness relates to whether the objectives of the project were achieved. Impact is the long-term effect, while sustainability relates to the maintenance of the changes after the project or program has been terminated (Dale 1998, Lewin 1993, Samset 1993). Sida adds "lessons learned" to the criteria, which refers to important general experience the evaluation has yielded. All or some of the criteria are commonly addressed in evaluations of education projects, although the terms may be used differently.
Although the concepts seem relevant and reasonable, there are several problems associated with them. First of all, they need to be defined and specified in terms of identifiable activities when the project is being planned. This is often not the case. Secondly, the concepts "relevance", "impact" and "sustainability" may be difficult to investigate. The project may have been going on for too short a period and/or there is not enough time available to find answers to this type of question. Another objection is that the OECD concepts are related to final products and not to descriptions of the processes that have taken place. No projects/programs, whether in research or evaluations, develop neatly in a linear way. The description of the project and the steps we plan in the beginning are always transformed as they are being implemented. Major decisions regarding adjustments and turning points should be described and justified and be part of the evaluation.

I think this will be useful in the next paper when I'm analysing the themes and relating them back to the literature.

The following excerpt is from p.12. The author uses the terminology 'Terms of Reference' (ToR) to describe the evaluation plan. This is in line with other articles and will fit into the paper on the lit review/gaps in the lit.

Often the terms of reference are too vast and ambitious, taking the time and resources into consideration. Sometimes questions and topics are suggested that are not possible to answer, at least not within the limited time available. As a result, the evaluators know already before they start working that it is not feasible to follow the ToR.
And again on p.18:
8 WHEN TO EVALUATE
Ideally the planning of an evaluation should start in parallel with the activities/project. But more often the opposite is the case; the coordinators start to think about evaluation when the project is concluded or about to finish.

Saturday, July 21, 2012

Analysis of Phase One

The research questions for this phase are:
  1. What evaluation forms and approaches have been used in Macquarie funded learning and teaching projects? [easy to answer from the interview data.]
  2. What are the issues and challenges in evaluating learning and teaching projects?
    [This was the original Q11 in the interviews. Briefly, items would include:
    lack of skills - guidelines and support/resources needed
    initial plans too ambitious
    insufficient use of stakeholders
    insufficient money/budget - to pay for extra help or input when needed i.e. admin support
    no feedback at any stage in the project
    lack of time
    • to plan
    • for reflection/learning
    • too busy with teaching and other demands]
  3. What is understood by evaluation? [Can look at the misuse of or confusion in terminology.
    For example, Evaluation vs. Research and Evaluation vs. Project, in terms of both planning and results. There is also some confusion between evaluation and feedback. Also look at Q12 from the interviews.]
  4. How does perception influence how evaluation is carried out in practice? Or, what influences how evaluation is carried out in practice? [For this I think I need to go through the data and pull out all examples where perception is discussed, albeit implicitly most of the time. I also need to look for other papers that have done this and see how they have done it, and to refer back to the theoretical approach of the project as well as the realism paradigm.]