Saturday, November 10, 2012

The write/right journal

I'm going to try to publish my literature review, but the question I'm now finding hard to answer is which journal. The suggestion was to target one of the journals cited in the paper. So for each of these I'm checking the journal requirements, particularly the length of papers accepted, since this one is particularly long. Summaries of journals are on this page. Studies in Educational Evaluation seems like a good possibility: they publish really long articles (over 9,000 words) and 4 of the articles in my paper came from this journal. Another possibility is Assessment and Evaluation in Higher Education, but they limit articles to between 3,000 and 5,000 words. I'm also interested in the Journal of MultiDisciplinary Evaluation, which publishes long articles; it is open access and states it has a quick turnaround for feedback. They suggest a limit of 10-12 pages, though they do accept longer articles.


Thursday, November 1, 2012

A Similar Study


Oliver, MacBean, Conole & Harvey, 2002
Using a toolkit to support the evaluation of learning

This research was prompted by a rejection of the assumption that all users have similar evaluation needs, which raises the problem that practitioners must be aware of the range of evaluation methods available to them and be able to select the approach that best addresses their needs (p. 207). The study was primarily concerned with evaluating ICTs, and more specifically with the usability of a toolkit developed by JISC in the UK. The web-based toolkit was developed out of a realisation that existing evaluation instruments weren't quite hitting the mark. The new toolkit combines a structured design process (as in previous toolkit approaches) with rich descriptions of methods and supportive resources (such as the 'cookbook' approach).
The toolkit has a six step approach which can be thought of as a combination of contextual and mechanical or strategic and tactical, with the needs of the stakeholders driving the process.

  1. Identification of the audience for the evaluation
  2. Selection of an evaluation question
  3. Choice of an evaluation methodology 
  4. Choice of data collection methods 
  5. Choice of data analysis methods 
  6. Selection of the most appropriate format(s) for reporting the findings to the audience
The toolkit is organised into three sections: Evaluation Planner, Evaluation Advisor and Evaluation Presenter. The Planner section helps the user define the scope of their evaluation, and its output is an evaluation strategy and implementation guide. The Advisor section covers the empirical aspects of the evaluation, and the Presenter section focuses on communicating findings to the stakeholders. Finally, an evaluation plan can be printed out.
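The paper describes the toolkit itself, not code, but to get the shape of the six decisions clear in my own head here is a rough sketch in Python of how they might be captured and then printed out as a plan, which is roughly what the Planner, Advisor and Presenter sections walk a user through. The class name, field names and example choices are all my own invention, not part of the JISC toolkit.

# A rough, home-made sketch of the six toolkit decisions as a simple record
# that can be printed out as an evaluation plan. Illustrative only.
from dataclasses import dataclass, asdict

@dataclass
class EvaluationPlan:
    audience: str          # step 1: who the evaluation is for
    question: str          # step 2: the evaluation question
    methodology: str       # step 3: the overall evaluation methodology
    data_collection: str   # step 4: data collection methods
    data_analysis: str     # step 5: data analysis methods
    reporting_format: str  # step 6: how findings go back to the audience

    def printable(self) -> str:
        # One line per decision, ready to print as a simple plan.
        return "\n".join(f"{field}: {value}" for field, value in asdict(self).items())

# Hypothetical choices for a small learning and teaching project.
plan = EvaluationPlan(
    audience="Project steering group",
    question="Did the online module improve assignment submission rates?",
    methodology="Case study",
    data_collection="Student focus groups; LMS usage logs",
    data_analysis="Thematic coding; descriptive statistics",
    reporting_format="Short written report plus a presentation",
)
print(plan.printable())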

This study was intended to provide formative feedback on the toolkit and summative information about its impact; the latter is not reported in this paper as the time period did not allow for it. Observational studies were used, which involved participants working through the toolkit using a 'talk-aloud' protocol with an expert on hand to provide support and guidance. At the end of the session, users provided further feedback in a focus group. Participants in this stage were all novices to the field of evaluation.
After this first round, modifications were made to the toolkit and then a second round through the form of a workshop was carried out. Participants in this stage were from a range of backgrounds and experiences.
The study showed that working through the toolkit allowed users to design evaluation strategies tailored to their local needs, rather than simply falling back on familiar but inappropriate methods. Some limitations were noted: the time to work through the toolkit is about 4.5 hours, which may make it unsuitable for smaller projects, and some users felt uncomfortable when presented with an evaluation approach that was unfamiliar to them. A rank ordering of options may be a better solution.

Saturday, October 27, 2012

Writing

I'm currently writing up my lit review, which is taking longer than I expected. Initially I had planned to conduct a review of the literature as I progressed through the PhD. However, we have discussed writing this up as a paper defining why a study of the nature I proposed for my PhD is needed, i.e. a rationale, if you like.
One of my issues with this is that there will be more literature as I go, so this can't be one defining document. Marina suggested I publish the table in which I'm synthesising findings from the papers on a public website so it can be constantly updated. I like this idea.
On a writing retreat this week I showed my writing group my efforts so far, and I have to say I was a bit disappointed with the feedback at first. The general opinion was that I shouldn't try to publish this lit review, as most journals would not be interested in it; after all, who am I to tell the world...? They suggested I keep writing it up, but that it become a chapter in the thesis rather than a journal article. That's actually fine, and I'll discuss it with my supervisors next week.

So where I am at right now is trying to weave the themes that have come from my searching, with the relevant items from the articles and then craft a suitable reflection/discussion and conclusion.
Themes I'm working with are:

Formative vs. summative evaluation
Building capacity in evaluation skills
Non-completion of project elements (such as evaluation, reports and the project itself)
Non-usage of evaluation findings
Importance of stakeholder involvement
Inaccurate initial expectations (could be linked to #2)
Importance of planning and defining clear evaluation criteria
Evaluation approaches, models or frameworks



Tuesday, October 23, 2012

Theory-based evaluation


Nesman, Batsche & Hernandez (2007)
Theory-based evaluation of a comprehensive Latino education initiative: An interactive evaluation approach

This paper describes a 5-year initiative to develop, implement and evaluate program(s) that would increase Latino student access to higher education. Theory of change and logic models were used to guide the program, as these have previously been shown to be most effective when trying to create social change within comprehensive community initiatives. 'Logic models can also serve as a roadmap for implementers to move from ideas to action by putting components together into a visual framework (Hernandez & Hodges, 2003).' (p.268)
A conceptual model was developed which incorporated context, guiding principles, implementation strategies, outcomes and evaluation, and resulted in a vision statement for the program. The paper also describes the interventions that were to be implemented, and goes on to describe the evaluation approach in more detail. They use an embedded case-study design (Yin, 1984) and mixed methods with a developmental approach, which allowed for adaptation over time as the project moved through the varying stages of completion. Key questions were developed associated with each goal from the funding agency, i.e. Process, Impact and Sustainability. One of the key findings under process was that the initial plan 'had been overly ambitious and that it would not be possible to accomplish this large number of interventions with the available resources' (p.272). This resulted in a paring back of outcomes, with some initiatives being prioritised and some being dropped altogether. A finding under impact was that 'although it would be several years before long term outcomes could be effectively measured, the evaluators developed a tracking system to monitor changes in student outcomes each year' (p.274). With sustainability, it was felt that strategies within institutions were more likely to be sustained than those relying on collaboration and cross-institutional coordination, unless there was ongoing external support (p.279).
The authors also wrote about lessons learned from this approach. If theory-based evaluation is to be maximised, it requires training of program participants in logic model development and theory of change approaches early in the implementation process. This training can lead to the development of interactive and productive relationships between evaluators and implementers. Adopting a developmental approach was also highly beneficial in this project.

Participatory Evaluation


Lawrenz, (2003). How Can Multi-Site Evaluations be Participatory?

This article looks at five NSF-funded multi-site programs and asks whether they can be considered truly participatory, since participatory evaluation requires stakeholder groups to have meaningful input in all phases, including evaluation design, defining outcomes and selecting interventions. The authors highlight, though, that these projects were funded through a competitive process, and selection of successful projects was not based on their facilitation of successful (central) program evaluation. The programs investigated were: Local Systemic Change (LSC), the Collaboratives for Excellence in Teacher Preparation Program (CETP), the Centers for Learning and Teaching (CLT), Advanced Technological Education (ATE) and the Mathematics and Science Partnerships (MSP).
Criteria used to evaluate whether these programs were participatory in their evaluation practices drew on two frameworks: Cousins and Whitmore's three-dimensional formulation of collaborative inquiry (1998) and Bourke's participatory evaluation spiral design using 8 key decision points (1998). Four types of decision-making questions were used to compare the degree to which each of the individual projects was involved with the program evaluation. These were:
(1) the type of evaluation information collected, such as defining questions and instruments;
(2) whether or not to participate;
(3) what data to provide; and
(4) how to use the evaluation information.

Findings showed that the programs were spread across a continuum from no participation to full participation. So the authors next asked 'in what ways can participation contribute to the overall quality of the evaluation' (p.476). They suggest four specific dimensions of quality evaluation:
(1) objectivity,
(2) design of the evaluation effort,
(3) relationship to site goals and context, and
(4) motivation to provide data,
and go on to discuss these in relation to the literature. Finally, they propose a model for participatory multi-site evaluations which they name a 'negotiated evaluation approach'. The approach consists of three stages: creating the local evaluations (each project), creating the central evaluation team, and negotiation and collaboration on the participatory multi-site evaluation. This enables the evaluation plan to evolve out of the investigations at the sites and results in instruments and processes which are grounded in the reality of the program as it is implemented.



Sunday, October 21, 2012

The Multiattribute Utility (MAU) approach


Stoner, Meadan, Angell and Daczewitz (2012)
Evaluation of the Parent-implemented Communication Strategies (PiCS) project using the Multiattribute Utility (MAU) approach



The Multiattribute Utility (MAU) approach was used to evaluate a project federally funded by the Institute of Education Sciences. The purpose of the evaluation was formative, measuring the extent to which the first two (of three) goals of the project were being met, and it was completed after the second year of the project. The project goals were:
(a) develop a home-based naturalistic and visual strategies intervention program that parents can personalize and implement to improve the social-pragmatic communication skills of their young children with disabilities;
(b) evaluate the feasibility, effectiveness, and social validity of the program; and
(c) disseminate a multimedia instructional program, including prototypes of all materials and methods that diverse parents can implement in their home settings.
MAU was chosen as an approach because it is participant-oriented, allowing the parent representatives to have a voice in the evaluation. There are seven steps in an MAU evaluation, and each is discussed in the paper.
1. Identify the purpose of the project
2. Identify relevant stakeholders (these individuals will help make decisions about the goals and attributes and their importance)
3. Identify appropriate criteria to measure each goal and attribute
4. Assign importance weights to the goals and attributes
5. Assign utility-weighted values to the measurement scales of each attribute
6. Collect measurable data on each attribute being measured
7. Perform the technical analysis
A point to note under step 3 is that the aim is to identify the essential attributes within each goal area, not a set number of attributes. For this project, 28 attributes were defined by the stakeholders, and 25 were found through the evaluation to have been met.
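The paper doesn't spell out the arithmetic behind the technical analysis, but as I understand MAU it boils down to a weighted sum: each attribute gets an importance weight (step 4) and a utility value from its measurement scale (steps 5-6), and the technical analysis (step 7) combines them into a single score. A toy sketch in Python to make this concrete for myself; the attribute names and numbers are made up, not from the PiCS evaluation.

# Toy MAU calculation for one goal area (illustrative only).
# Each attribute: (name, importance weight, utility value on a 0-100 scale).
attributes = [
    ("Parents can personalise the intervention", 0.40, 85),
    ("Materials are usable in the home setting", 0.35, 70),
    ("Training sessions delivered on schedule",  0.25, 90),
]

# Normalise the importance weights so they sum to 1.
total_weight = sum(weight for _, weight, _ in attributes)

# Weighted utility score for the goal area.
score = sum(weight / total_weight * utility for _, weight, utility in attributes)

print(f"Overall weighted utility: {score:.1f} / 100")  # -> 81.0 / 100

So in this made-up example the goal area scores 81 out of 100, and it is the stakeholder-agreed weights that give the approach its degree of objectivity and transparency.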
For this project the MAU approach was found to be in keeping with one of the core values of the project, that of stakeholder involvement. Four primary benefits of using this approach were identified, along with one concern. The MAU:
(a) was based on the core values of the PiCS project;
(b) engaged all stakeholders, including parents, in developing the evaluation framework;
(c) provided a certain degree of objectivity and transparency; and
(d) was comprehensive.
The primary concern was the length of time and labour required to conduct the evaluation. For this reason the authors believe it may not be applicable for evaluating smaller projects. 

Saturday, October 20, 2012

Archipelago approach


Lawrenz & Huffman, (2002)
The Archipelago Approach To Mixed Method Evaluation

This approach likens the different data collection methods to groups of islands, all interconnected 'underwater' by the underlying 'truth' of the program. The approach has its advantages: it is often difficult to uncover the complete 'truth', so using a combination of data types and analysis procedures can help. The authors quote Green & Caracelli (1997) and their three stances to mixing paradigms (the purist, pragmatic and dialectical stances) and then attempt to map the archipelago approach to each of these three stances. In doing so they oppose Green and Caracelli's view that the stances are distinct; the authors believe their metaphor allows for simultaneous consideration and thus provides a framework for integrating designs.

A nationally funded project was evaluated using the archipelago approach to highlight its benefits. Science teachers in 13 high schools across the nation were recruited, and consideration was given to the level of mixing of methods in an area that traditionally used a more 'logical-positivist' research approach. Three different approaches were used:
1. A quasi-experimental design – both quantitative and qualitative assessments of achievement. About half of the evaluation effort, in terms of time and money, was spent on this approach. This was pragmatic, as it was included to meet the needs of stakeholders.
2. A social interactionism approach – data was gathered through site visits to schools and classrooms, with observations recorded as open-ended field notes; this produced narrative descriptions of each site. About one third of the evaluation effort focused on this approach.
3. A phenomenological study of six of the teachers during implementation of the new curriculum, via in-depth interviews.
The archipelago approach extends the idea of triangulation, which is linear, to take into account the complex, unequally weighted and multi-dimensional nature of the data. When considering the underlying truth about the effectiveness of the program, achievement was viewed as likely to be the strongest indicator, and therefore most effort went into this approach. The learning environment was considered the next strongest indicator and the teachers' experience the weakest.
'This approach created a way for the authors to preserve some unique aspects of each school while at the same time considering that the schools were linked in some fundamental way' (p.337). It is hoped that this approach can lead evaluators to think less in either/or ways about mixing methods and more in complex, integrative ways.

Another Website?

At my supervision session this week, we talked about the possibility of creating a public website to publish the lit review. A website would become a useful resource, allow public comment and become the place where I could show my results, particularly if I go on to develop an interactive evaluation instrument.

The question is, can this website act as a publication? I could then refer to the table (which is growing unwieldy for a Word doc) when I write the paper on the literature synthesis. Questions to ponder.

Evaluation Capacity Building


Preskill, H., & Boyle, S. (2008). A Multidisciplinary Model of Evaluation Capacity Building. American Journal of Evaluation 29(4), 443–459.


Evaluation Capacity Building (ECB) has become a hot topic of conversation, activity and study in recent times. This paper offers a comprehensive model for designing and implementing ECB activities and processes. The model draws on the fields of evaluation, organisational learning and change and adult learning.
Triggers for ECB usually come from external demands (accountability, environmental change, policies and plans) or internal needs (organisational change, a mandate from leadership, a perceived lack of knowledge and skills, increased funding, a perceived shortage of evaluators, a desire to improve programs).
Assumptions are also required and may include: '(a) organization members can learn how to design and conduct evaluations, (b) making learning intentional enhances learning from and about evaluation, and (c) if organization members think evaluatively, their programs will be more effective' (p.446).
And expectations of any ECB effort may include:
  • Evaluations will occur more frequently 
  • Evaluation findings will be used more often for a variety of purposes (including program improvement, resource allocations, development of policies and procedures, current and future programming, and accountability demonstrations)
  • Funders will be more likely to provide new, continuing, and/or increased resources
  • The organization will be able to adapt to changing conditions more effectively
  • Leaders will be able to make more timely and effective decisions
  • The organization will increase its capacity for learning


The paper looks at each of the 10 teaching and learning strategies (from the inner left circle of the model), then goes on to stress the importance of the design and implementation of any initiative. With design, the following are of importance:
  • identifying ECB participants' characteristics - need to assess the evaluation competence of potential participants
  • determining available organisational resources - including facilitation and time
  • relevant evaluation, learning and individual and organisational change theories
  • ECB objectives - cognitive, behavioural and affective
** some good references here on improving attitudes towards evaluation and reducing stress and anxiety around evaluation **

There is a section on transfer of learning and an acknowledgement that dialogue, reflection and articulating clear expectations for what to transfer and how to transfer knowledge and skills are critical for the longer-term impacts of ECB (p.453).

In terms of sustainable practice, the right circle of the model is described in more detail, with each of the 8 elements discussed. Finally, the diffusion element is explored. The authors use the image of ripples in water, or reverberations, to depict emanation from the organisation.

They conclude that for ECB to be transformational, efforts must be intentional, systematic and sustainable. (p. 457)




Friday, October 12, 2012

A Project Profile Approach to Evaluation

Accountability is a common driver for evaluation, particularly as funding bodies strive to obtain measurable gains for their investments: in teacher content knowledge, change in practice and, of course, student learning. The authors of this study argue that individual project profiles are needed to take into account the unique contextual variables of a project whilst comparing projects across a funded program.

The context for this study was professional development of teachers in the K-12 sector under the Improving Teacher Quality State Grants Program. Each grant recipient is required to conduct some internal evaluation processes and also to be part of an external evaluation. This paper reports on the design of the latter. The goals were to determine:

(1) how well projects attained their objectives; 
(2) the quality of the PD that was delivered, and 
(3) what outcomes were achieved for teachers and students.

Nine projects were investigated and a profile for each was constructed. The profile consisted of 6 sections: project background; project design; participants and their schools; quality of implementation; satisfaction survey; and outcomes and recommendations. In other words, do not compare outcomes alone, since the teachers and the school settings can vary significantly across school sites and therefore outcomes alone do not tell the whole story.

Then the model of using project profiles was compared to a model for evaluating professional development programs (Guskey, 2000). Guskey's hierarchical model includes five levels, moving from the simple to the more complex:
1. Participants' reactions
2. Participants' learning
3. Organization support and change
4. Participants' use of new knowledge and skills
5. Student learning outcomes


The authors mapped their model against Guskey's and found that they needed to modify Guskey's model to make it more holistic. They created a central core containing levels 1, 2, 4 and 5, each fed by level 3, with an outer layer of content, context and process (p.152).

Sunday, September 30, 2012

Case Study


Studying cases allows for obtaining an in-depth understanding (through explaining, exploring, and describing) of complex social phenomena, while retaining the holistic and meaningful characteristics of real-life events (Yin 1994).

Kohlbacher, F. (2006). The Use of Qualitative Content Analysis in Case Study Research. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 7, Jan. 2006. Available at: <http://www.qualitative-research.net/index.php/fqs/article/view/75/153>. Date accessed: 01 Oct. 2012.
Abstract: This paper aims at exploring and discussing the possibilities of applying qualitative content analysis as a (text) interpretation method in case study research. First, case study research as a research strategy within qualitative social research is briefly presented. Then, a basic introduction to (qualitative) content analysis as an interpretation method for qualitative interviews and other data material is given. Finally the use of qualitative content analysis for developing case studies is examined and evaluated. The author argues in favor of both case study research as a research strategy and qualitative content analysis as a method of examination of data material and seeks to encourage the integration of qualitative content analysis into the data analysis in case study research.

Meta-evaluation of methods for selecting projects


Brandon, Paul R. “A Meta-evaluation of Schools’ Methods for Selecting Site-managed Projects.” Studies In Educational Evaluation 24, no. 3 (1998): 213–228.

A meta-evaluation of 17 schools that applied for funding from a state-wide initiative. The author was interested in finding out how the schools evaluated which projects were put forward for funding applications.
There are three types of evaluative effort required of schools: needs assessment; project evaluation (when searching for the project that best meets the student and school needs); and summative and formative evaluation after project implementation. This study investigates the second type, i.e. school-based evaluative efforts and activities used when selecting educational projects to address identified needs.

The results showed that the extent to which teachers participated in making decisions about both the process and content of needs assessments was positively related to the validity of these decisions. (p.214)
Evaluation criteria came from using the CIPP approach to evaluation (Stufflebeam, 1983). The four criteria are (a) the extent to which all faculty and staff participated in selecting projects, (b) the extent to which school personnel used the appropriate sources of information, (c) the extent to which the schools compared their preferred projects with other available projects before making their final project selections, and (d) the extent to which the schools considered issues of feasibility such as project cost and ease of implementation. (p.216). A fifth criterion was used, based on the belief that projects are most likely to succeed when they are based on theories of education and have been shown to have succeeded elsewhere (Ellis & Fouts, 1993; Slavin, Karweit, & Madden, 1989). These five criteria then supported the five evaluation questions to be asked: (a) To what extent did school personnel participate in project selection? (b) To what extent did the schools collect information about possible projects from the appropriate types of sources? (c) To what extent did the schools compare their preferred projects with others before making final project selections? (d) To what extent did the schools consider project cost and ease of implementation when selecting projects? (e) To what extent were the selected projects based on theory and supported by empirical findings of previous studies? (p.218)

Two data collection methods were used, a self-report survey questionnaire (for first four questions) and a literature review (for fifth question). Findings: results were encouraging for two of the questions (b and c) and not so encouraging for the remaining three (a, d and e). Three categories of problems that the schools had encountered when implementing their projects were identified.
  • inaccurate estimates of project costs.
  • misjudging the managerial, administrative, or logistical requirements of the projects.
  • underestimating the level of staff, parent, or community understanding or motivation required for successful project implementation.
[useful when writing about my findings from Phase 1]

The final question highlighted that empirical evidence about project success was not found for about half of the schools. This could be explained by the fact that school personnel often know little about the proper use of research findings; this should not be the case with HE project proposals.
Selecting projects is a task that is added to schools' full schedules, and, typically, few faculty and staff are available or willing to participate. [same could be said of HE] However, the author recommends that schools be shown the advantages of allowing as many of their staff and faculty as feasible to participate in project selection. Assistance of this sort would improve the chance that the best projects are identified to meet needs and would help ensure that project funding is well spent.


An evaluation framework for sustaining the impact of educational development


Hashimoto, Kazuaki, Hitendra Pillay, and Peter Hudson. “An Evaluation Framework for Sustaining the Impact of Educational Development.” Studies In Educational Evaluation 36, no. 3 (2010): 101–110.

The context of this paper is international aid agencies' funding of educational development projects in recipient countries and the apparent ineffectiveness of those projects. The authors were interested in overcoming donor agencies' internal compliance requirements by looking at how local evaluation capacity could be developed and also how developments could continue to be sustained after project completion. Although this context is not directly applicable to the HE sphere, the same could be said of external funding agents vs internal projects.

The authors define process evaluation (quoting: DAC Network on Development Evaluation. (2008). Evaluating development cooperation. OECD DAC Network on Development Evaluation. Retrieved January 6, 2010, from http://www.oecd.org/dataoecd/3/56/41069878.pdf) and state that the importance of process evaluation lies in involving participants in making decisions about a project, such as terminating it if necessary (p.102). The authors quote Miyoshi and Stemming (2008) to the effect that most studies on evaluation with participatory approaches are not underpinned by evaluation theories but are method-oriented.

So an Egyptian project was used as a case study (see previous post) and there were two research questions: (1) how can an entire educational development project be evaluated? and (2) how can capacity development in educational reform be evaluated? Participants included six different groups of stakeholders: the funding body, local administrators, researchers, teachers, parents and students. The analytic technique used was pattern matching (Yin, 2003, p. 116), to enhance internal validity. There were three emergent themes in the study: context, outcome and process evaluation.

Outcome evaluation:

  • assessing outcomes is necessary for determining the success of an educational reform project.
  • Outcome evaluation should include local participant involvement in evaluating a project, since they are the end users.
  • Local stakeholders should not be seen as informants or discussants but rather as evaluators working jointly with aid agencies so they can appreciate the success and failure of achieving the objectives.
  • Results supported the use of an external evaluator who, in collaboration with the internal evaluators of the project, can undertake a macro-level evaluation of the project.


Context Evaluation

  • context evaluation assesses local needs and problems to be addressed, as well as cultural, political and financial issues, and assists in designing a project and setting objectives before the initiation of an educational project.
  • more local stakeholders, such as representatives of the local community, need to join the evaluation to make their voices heard because, after all, they are the beneficiaries.
  • This engagement of various stakeholders in dialogues throughout the project from the project design phase may enable their opinions and interests to be considered for designing and implementing a more effective project (House & Howe, 2000).
Process evaluation


  • There was a need for adopting a systematic participatory evaluation approach involving individuals and groups at the different levels of an educational system, which was central to process evaluation. 
  • the linchpin of a sound process evaluation is employing skilled people
  • the practice and culture of process evaluation should be nurtured during the life of educational projects and be institutionalized locally. This has the potential to sustain the impact of projects. 

Conclusion - conventional monitoring and evaluation practices do not have the ability to sustain a project beyond its lifetime, and 'paradigms should shift from outcome-focused evaluation currently dominated by international donor agencies to process evaluation conducted largely by local participants but also supported by donor agencies' (p.109).

The framework is outlined in a figure in the paper (p.108).
Other articles that follow this line of thinking:
 Donnelly, John. Maximising participation in international community-level project evaluation: a strength-based approach. [online]. Evaluation Journal of Australasia, v.10, no.2, 2010: 43-50. Availability:<http://search.informit.com.au/fullText;dn=201104982;res=APAFT> ISSN: 1035-719X. [cited 01 Oct 12].

Nagao, M. (2006). Challenging times for evaluation of international development assistance. Evaluation Journal of Australasia. [aes.asn.au]


Saturday, September 29, 2012

Case study methodology

Hashimoto, Pillay, and Hudson, (2010) “An Evaluation Framework for Sustaining the Impact of Educational Development.”

p.103: This study adopted a case-study methodology as the case can be separated out for research in terms of time, place, or some physical boundaries (Creswell, 2008). Separating the case was critical as there are ongoing education reform projects happening in Egypt. The procedure was guided by Yin's (2003) model with its sequence of five steps. These steps were: (i) developing research questions; (ii) identifying the research assumptions; (iii) specifying the research unit(s) of analysis; (iv) the logical linking of data to the assumptions; and (v) determining the criteria and interpreting the findings. The case study is convenient for illuminating the contextually-embedded evaluation process using multiple data sources. This study used three data sources to triangulate the data. They were: (i) the JICA evaluation reports on the project, (ii) a survey questionnaire, and (iii) interviews with stakeholders. To unravel the two main research questions, the following three sub-research questions were applied to these three data sources: (1) Who should be involved in the evaluation process? (2) When should the evaluation be conducted? (3) Why should the evaluation be conducted? These three questions provide a holistic understanding of the key players involved in making decisions, the rationale for the timing of the evaluation activities (investigating the assumption underpinning such timing) and the justification of the evaluation actions. Such a holistic approach is consistent with Burns's (2000) argument that case studies should consider constructs from multiple perspectives in order to develop a deeper and more complete understanding of the constructs.

Saturday, August 25, 2012

Effective Dissemination


Deborah Southwell, Deanne Gannaway, Janice Orrell, Denise Chalmers & Catherine Abraham (2010): Strategies for effective dissemination of the outcomes of teaching and learning projects, Journal of Higher Education Policy and Management, 32:1, 55-67. http://dx.doi.org/10.1080/13600800903440550

I came across this article and found some useful information in it and in its related ALTC (2005) report of the same name. The report was reviewed in 2011 and is now in the OLT repository, here.

Summary:

The paper looks at a range of Australian and international funding bodies that support L&T projects and asks the question 'How can a national institute for learning and teaching in higher education maximise the likelihood of achieving large-scale change in teaching and learning across the Australian higher education sector, especially through its grants program and resource repository and clearing house?' (p.58)
Now whilst this covers national schemes, there are some important findings that could be applied to smaller-scale projects and schemes such as the one I'm looking at, and there are also some interesting findings on evaluation. Furthermore, the ALTC report identifies some other international funding bodies which I could follow up on in terms of evaluation requirements etc.
Questions from the project team's guiding framework are included, and a couple would be relevant to my study:
What are the influences on and factors affecting a teacher's decision to make use of a project or process that is being scaled up? What local and external factors facilitate or create barriers affecting the teacher's decisions? What is the relationship between the development of local capacity and the quality of external reform structures? (p.59)

The projects selected came from a range of locations, with a focus on projects that sought to change teaching and learning processes, practices and beliefs rather than on projects that focussed on developing products (p.60).

Some Findings:
Two items of note were:
1. Initiation of an innovation - intended users of an innovation need to be engaged very early in the planning stages of the innovation in an endeavour to ensure adoption and take-up of ideas later on (p.61). This can be extrapolated to my project in terms of getting stakeholders involved from the early stages and getting uptake of evaluation findings.
2. Implementation, embedding and upscaling of innovation - one influence on this comes from the personal conception of teaching (p.62). I'm suggesting that the same is true for evaluation. Also, academics with teaching qualifications appear to be more open to investigation of alternative curriculum and teaching approaches (Lueddeke, 2003). It would be interesting to test whether this is the case with evaluation, i.e. does the qualified teacher value evaluation mechanisms more and therefore employ them in their projects, in comparison with academics without teaching quals? And if so, is that because they have been 'taught' evaluation methods and skills and therefore their conception is different?

Conditions for successful dissemination:
  1. effective, multi-level leadership & management
  2. climate of readiness for change
  3. availability of resources
  4. comprehensive systems in institutions and funding bodies
  5. funding design
This last point is of interest - it mentions that expectations about the approaches to projects and activities that ought to be adopted are taken from the funding design. So I could say that if there is no real expectation of evaluation requirements, then of course the project applicants are not going to be that stringent. I can link to this in the analysis of phase one.

One final mention in the conclusion (p.65) says 'An important aspect of this study was to identify and to recommend that learning and teaching grant recipients must be supported and provided with access to experts in educational innovation and evaluation.'


The 2005 Carrick report goes into much more detail, and evaluation is mentioned throughout. It also gives recommended strategies for each of the 5 'conditions' at the national, institution and discipline levels. When I read through, it looks like they were all implemented at MQ, i.e. recommending standard requirements for applications, including a description of which evaluation strategies will be used.

For item 3, the report recommends providing project teams with access to specialist expertise - this could be for evaluation, or at the least evaluation resources. Findings on p.55 state that 'those responsible for the project may require assistance in designing an appropriate evaluation process'. There is also information on this on page 45 (emerging themes).

For item 4, it mentions that support for quality processes, particularly monitoring and evaluation, ought to be supplied, and that evaluation should be reported within an evaluation framework. Also, on p.58 there are findings stating that projects at institutions that allocated funds AFTER the projects were finished were evaluated well and regularly and were eventually embedded within the institution. 'Generally, however, experiences quoted in the literature and in case studies evidenced poor quality of evaluation if done at all.' It then went on to explain that frequently dissemination across institutions occurred before it was apparent that there was any value or improvement in student learning, and therefore impact analysis was vital.

The 2011 review of this project didn't seem to add much more with regard to evaluation, other than a recommendation that external evaluation reports be made publicly available. There was a reference to Dow (2008) which I should follow up and which could provide some useful data:
An evaluation of the Australian Learning and Teaching Council 2005-2008. Sydney: Australian Learning and Teaching Council.