Saturday, August 25, 2012

Effective Dissemination


Deborah Southwell, Deanne Gannaway, Janice Orrell, Denise Chalmers & Catherine Abraham (2010): Strategies for effective dissemination of the outcomes of teaching and learning projects, Journal of Higher Education Policy and Management, 32(1), 55-67. http://dx.doi.org/10.1080/13600800903440550

I came across this article and found some useful information in it, and in its related ALTC (2005) report of the same name. The report was reviewed in 2011 and is now in the OLT repository, here.

Summary:

The paper looks at a range of Australian and international funding bodies that support L&T projects and asks the question: 'How can a national institute for learning and teaching in higher education maximise the likelihood of achieving large-scale change in teaching and learning across the Australian higher education sector, especially through its grants program and resource repository and clearing house?' (p.58)
Now whilst this covers national schemes, there are some important findings that could be applied to smaller-scale projects and schemes such as the one I'm looking at, and there are also some interesting findings on evaluation. Furthermore, the ALTC report identifies some other international funding bodies which I could follow up on in terms of evaluation requirements etc.
Questions from the project team's guiding framework are included, and a couple are relevant to my study:
What are the influences on and factors affecting a teacher's decision to make use of a project or process that is being scaled up? What local and external factors facilitate or create barriers affecting the teacher's decisions? What is the relationship between the development of local capacity and the quality of external reform structures? (p.59)

The projects which were selected came from a range of locations but focussed more on projects that sought to change teaching and learning processes, practices and beliefs rather than on projects that focussed on developing products. (p.60)

Some Findings:
Two items of note were:
1. Initiation of an innovation - Intended users of an innovation need to be engaged very early in the planning stages of the innovation in an endeavour to ensure adoption and take-up of ideas later on (p.61). This can be extrapolated to my project in terms of getting stakeholders involved from the early stages and getting uptake of evaluation findings.
2. Implementation, embedding and upscaling of innovation - one influence on this comes from the personal conception of teaching (p.62). I'm suggesting that the same is true for evaluation. Also, academics with teaching qualifications appear to be more open to investigation of alternate curriculum and teaching approaches (Lueddeke, 2003). It would be interesting to test whether this is the case with evaluation, i.e. does the qualified teacher value evaluation mechanisms more and therefore employ them in projects, in comparison with academics without teaching qualifications? And if so, is that because they have been 'taught' evaluation methods and skills and therefore their conception is different?

Conditions for successful dissemination:
  1. effective, multi-level leadership & management
  2. climate of readiness for change
  3. availability of resources
  4. comprehensive systems in institutions and funding bodies
  5. funding design
This last point is of interest - it mentions that expectations about the approaches to projects and activities that ought to be adopted are taken from the funding design. So I could say that if there is no real expectation of evaluation requirements, then of course the project applicants are not going to be that stringent. I can link to this in the analysis of phase one.

One final mention in the conclusion (p.65) says: 'An important aspect of this study was to identify and to recommend that learning and teaching grant recipients must be supported and provided with access to experts in educational innovation and evaluation.'


The 2005 Carrick report goes into much more detail, and evaluation is mentioned throughout. It also gives recommended strategies for each of the 5 'conditions' at the national, institution and discipline levels. When I read through, it looks like they were all implemented at MQ, e.g. recommending standard requirements for applications, including a description of which evaluation strategies will be used.

For item 3, the report recommends providing project teams with access to specialist expertise - this could be for evaluation, or at the least evaluation resources. Findings on p.55 state that 'those responsible for the project may require assistance in designing an appropriate evaluation process'. There is also information on this on p.45 (emerging themes).

For item 4, it mentions that support for quality processes, particularly monitoring and evaluation, ought to be supplied, and that evaluation should be reported within an evaluation framework. Also, on p.58 there are findings stating that where institutions allocated funding AFTER the projects were finished, those projects were evaluated well and regularly and were eventually embedded within the institution. 'Generally, however, experiences quoted in the literature and in case studies evidenced poor quality of evaluation if done at all.' It then went on to explain that frequently dissemination across institutions occurred before it was apparent that there was any value or improvement in student learning, and therefore impact analysis was vital.

The 2011 review of this project didn't seem to add much more with regard to evaluation, other than a recommendation that external evaluation reports be provided publicly. There was a reference to Dow (2008) which I should follow up and which could provide some useful data:
Dow (2008). An evaluation of the Australian Learning and Teaching Council 2005-2008. Sydney: Australian Learning and Teaching Council.




What can be evaluated?

The following excerpt is taken from: Johannessen, E.M. (2001). Guidelines for evaluation of education projects in emergency situations. The Norwegian Refugee Council. http://toolkit.ineesite.org/toolkit/INEEcms/uploads/1039/Guidelines_for_Evaluation_of_Educ_Projects.PDF


5 WHAT CAN BE EVALUATED?
It is common in many evaluations to focus on the OECD criteria: relevance and fulfillment of objectives, development efficiency, effectiveness, impact and sustainability. Relevance refers to whether the project addresses the needs of the beneficiaries, efficiency to how productively the resources have been used, while effectiveness relates to whether the objectives of the project were achieved. Impact is the long-term effect, while sustainability relates to the maintenance of the changes after the project or program has been terminated (Dale 1998, Lewin 1993, Samset 1993). Sida adds "lessons learned" to the criteria, which refers to important general experience the evaluation has yielded. All or some of the criteria are commonly addressed in evaluations of education projects, although the terms may be used differently.
Although the concepts seem relevant and reasonable, there are several problems associated with them. First of all, they need to be defined and specified in terms of identifiable activities when the project is being planned. This is often not the case. Secondly, the concepts "relevance", "impact" and "sustainability" may be difficult to investigate. The project may have been going on for too short a period and/or there is not enough time available to find answers to this type of question. Another objection is that the OECD concepts are related to final products and not to description of the processes that have taken place. No projects/programs, whether in research or evaluations, develop neatly in a linear way. The description of the project and the steps we plan in the beginning are always transformed as they are being implemented. Major decisions regarding adjustments and turning points should be described and justified and be part of the evaluation.

I think it will be useful in the next paper when I'm analysing the themes and relating them back to the literature.

The following excerpt is from p.12. The author uses the terminology 'Terms of Reference' (ToR) to describe the evaluation plan. This is in line with other articles and will fit into the paper on lit review/gaps in the lit.

Often the terms of reference are too vast and ambitious taking the time and resources into consideration. Sometimes questions and topics are suggested that are not possible to answer, at least not within the limited time available. As a result the evaluators know already before they start working that it is not feasible to follow the ToR.
And again on p.18:
8 WHEN TO EVALUATE
Ideally the planning of an evaluation should start parallel with the activities/project. But more often the opposite is the case; the coordinators start to think about evaluation when the project is concluded or about to finish.

Saturday, August 18, 2012

More thoughts on phase 1 interviews

I've been reading through each of the interviews and marking up the themes that I have, and I've come across another theme that is partially related to evaluation but also related to grants and projects.

It seems that more than a few people I interviewed complained that even though their project created or produced some wonderful outputs, not everyone was willing to take up these new ideas. For example, redesigning some units as online units, with the content chunked up, meant that there were now more opportunities to interact with the material, but the students felt this was too much work and the tutors felt it was too much work (marking). This leads to the point that if you don't fully involve your stakeholders, the project outcomes can't really meet their needs.

On the other hand, another participant talked about this and said that there has to be a balance, because when you try to please all of the stakeholders in this way you end up with a substandard product, i.e. you 'dumb it down' to keep everyone happy and some of those novel and 'out there' ideas are lost in translation. This participant said that often it is a case of good timing: if no one takes up your new idea/product as you intended, it may be because they are just not ready yet.

That in turn leads to the conclusion I keep coming back to: evaluation has to be done later down the track (as well as formative and summative evaluation within the term of the project). Often it is too early to say whether a project has been successful or not. You can say whether you met your outcomes, but you cannot say whether those outcomes have had impact. Neither do people report what didn't work, or rather why outputs may not be being taken up.

I think perhaps there is also some confusion between steering groups and stakeholders. A steering group may well rein you in, but stakeholders will be sure to tell you what they want and why/how they want it. It's one thing to listen to their needs and then make an informed decision on how you are going to design your product (say). It is another to present it to a steering group, because then you pretty much have to make the changes they request.