Saturday, May 20, 2017

Canadian Evaluation Conference

A Storify of my tweets and favourites from others


My favourite quotes and comments from the conference:

evaluation is about problem solving not fault finding @AccessAlliance


Challenging assumptions is difficult when hierarchies are involved- people afraid to be truthful says @Mark Stiles

evaluators need to think beyond the report @Nancy Snow

so much more learning can occur when eval introduced early. @TolgaYalkin 

Blog about evaluation findings as a way to share info and show value @Nicholas_Falvo

Role of evaluation - “evaluation is essential for learning” (Uitto et al., 2017) @Astrid Brousselle

The single biggest problem about communication is the illusion that it has occurred @G_KayeP

Funding does not lead to impact. Funding leads to knowledge, which (once applied) leads to impact. @jwmcconnell

Lesson learned in DE: don’t assume that, just because people come together for a project, that they have the same understanding! @StrongRoots_SK

Building eval capacity is as messy as learning, to be transformational we need to help them understand time needed @carolynhoessler

Evaluators work in the space between the thinkers/doubters and the doers/faithful @a_Kallos

Intimacy = in 2 me see @Paul LaCerte


Penny Hawkins summarised the panel presentations (the second day's keynote): misaligned expectations; learning vs. accountability; valuing evaluation. This reaffirms the main points discussed in my thesis.

This online tool, developed by @eval_station, can be used to assess organisational evaluation capacity. It is similar to a benchmarking instrument and is meant to be used by a group rather than an individual. "The conversation is often of greater value than the answers to the questions."


I met Dr Justin Jagosh, the founder of CARES: Centre for Advancement in Realist Evaluation and Synthesis: https://realistmethodology-cares.org

Through the website I was able to make contact with Dr Prashanth N S, who maintains a reading list of articles identifying how realism is used in practice, which I wanted to read to help with the methodology section of my thesis. Great connection.

https://www.mendeley.com/community/critical-realism-and-realist-evaluation/


~~~~~~~~~~~~~~~~~~

3 Questions I was asked about the poster:

1. Who is the audience?
2. What is innovative about it?
3. What impact will it have?

These questions really made me think about the design of the poster/project and I hope will help me when writing it all up.

1. The findings from this study consisted of a set of recommendations, some aimed at those carrying out the evaluation and others aimed at the funders of these small projects, usually at the institution level or even the Faculty or School level. The former group can use some of the strategies to assist their evaluative efforts and help grow their evaluative skills. The latter group may learn more about the needs of grant awardees and be able to modify expectations and behaviours.
These two groups make up the audience for this project. However, I believe the findings and recommendations could be transferable to other sectors that offer small-scale grants for introducing innovations.

2. I'm not sure I would describe my research as innovative, but here we go. The evaluation framework, which was developed through action research cycles and resulted in an interactive online tool, was a great output from this study. A need for such a resource was identified, and the format of the final product is quite innovative in its simplicity.

3. I'm hoping that the impact of this study will come about when people (the identified audiences) start to evaluate their work better, through thoughtful planning and an understanding of the available options and requirements. When these small innovations and projects are evaluated, the findings need to be disseminated so that others can learn from and improve on them, leading in turn to an improved learning experience for our students.




Wednesday, May 10, 2017

Visiting UCLA

Well, I never thought the day would arrive, but I finally found myself on the UCLA campus visiting Professor Christina Christie, Chair of Education at UCLA, along with Professor Marvin Alkin, the renowned evaluation scholar.

We had just a short time to meet and there was no real agenda, but my aim was to present the findings of my PhD and get some feedback from them.

Of course the time flew and I only achieved half of what I had aimed for, but nevertheless some learning certainly occurred! Marv's brain was as sharp as a razor and he asked me many questions, including lots about the PhD experience in Australia. There is a marked difference between our two countries: in the USA you take two years of classes in your chosen discipline before beginning the doctoral journey. Students then spend a year 'qualifying' (I'm not exactly sure what that means), then defend their research proposal, and only then can they start the research. So they basically spend 6-7 years full-time on their PhD, whereas I have spent the same amount of time as a part-time student.

Anyway here are some of the take home messages - or rather questions for further reflection from this visit:

1. Is there really a difference between a project and a program? 

Marv was quite insistent that the two were interchangeable. I actually disagree; the difference may be minor, but I think my understanding of the context in which I was writing and researching helps me define it.

A project can be large or small, funded or unfunded, and in a project there is an aim to change something (in my case, to improve teaching by introducing an innovation, be it technological or methodological). The project plans how the change will occur, implements the change, and observes what happens to the output (in my case, student learning or the student experience). The evaluation of the project can simply observe any change in outcome, but it could (and should) formatively evaluate the process and reflect on the learning that takes place for both the teacher and the student.

A program can also be large or small, funded or unfunded (though most often it is both large and funded). However, the aim of a program is usually to provide a service that will result in an outcome, usually social betterment for the participants of the program. The evaluation of the program often aims to judge whether the program has been successful, sometimes with the aim of continuing (or not) the program's funding, and sometimes to recommend changes in how the program could be run better.

So, as you can see, the two items and their evaluation are very similar, and the terms are often used interchangeably. In my write-up of the thesis I need to revisit my definitions and perhaps clarify these nuances to make them clear to the examiners. Having said that, as I wrote this I actually struggled to clarify the differences, so more work is needed here!

2. Improvement Science

As I explained these nuanced differences to Tina and Marv, they looked knowingly at each other and said, "you should read this". They passed over a copy of the latest issue of the journal New Directions for Evaluation. This special issue introduces the field of improvement science and discusses the overlaps and differences between it and evaluation.

In a nutshell, improvement science is another term for formative evaluation leading to incremental change. There is a wealth of literature on this topic, so my summary is just the tip of the iceberg; however, I really like this idea because using this terminology could help overcome the misconceptions many people have about evaluation. I think the term would certainly appeal to the science, engineering and IT communities.

I will incorporate this, along with material from the special issue articles, into my discussion chapter, as one of the main topics I discuss from my findings is the learning dimension of evaluation, and this aligns perfectly with the term 'improvement science'. In the first article of the special issue, Christie discusses the similarities with Developmental Evaluation and Patton's response on how they actually differ. Again, this is useful for my thesis, as DE was an angle I discussed in the introductory chapter.

So those were the two main things I took from the visit. Both professors were interested (but not surprised) to hear my findings about misconceptions of evaluation and misalignment of praxis. They were also very interested in the online tool I developed, though we ran out of time to get any real feedback on it. I have since sent them the link to the tool and asked them to share it and comment if they have time.

There was one final thing I had hoped to get from the meeting: some suggestions for possible examiners. However, we did not get to that. I sent a follow-up email but didn't hear back, so I will chalk that one up to experience and work with the list I currently have.

3. Taking a class

I was invited to attend a class called 'Procedural Issues in Evaluation'. It was taught by one of Tina's postdocs (Jenn Ho) and included about nine graduate students, some of whom were doing doctoral studies, others who were just 'auditing', and others who were taking the subject as an elective from other degree programs. I was able to briefly talk about myself and my research, which in itself was a learning process. Over the trip I had occasion to do this numerous times, and it certainly got easier to condense my seven years into a few sentences!

In this class I was guided to an evaluation resource pack provided by the Kellogg Foundation (one of the largest philanthropic foundations in the USA, offering grants that give children, families and communities opportunities to reach their full potential). This resource has some excellent information (pp. 6-16) on challenging assumptions and recommendations for good evaluation practice. Again, I'll be able to refer to some of this information in my thesis, possibly in the discussion section and maybe even in the introduction.

Another resource/website I heard much about through this class was the Annie E. Casey Foundation, particularly its theory-of-change material. This resource will also come in handy, perhaps not for the thesis but certainly for future work as an evaluator (if that happens).

The aim of the class was to discuss the similarities and differences between using a logic model and a theory-of-change (TOC) approach in an evaluation. It was great to be a student again and actually learn by doing rather than just by reading.

After the class I spent an hour or so with Jenn discussing evaluation and PhDs, and it was a great way to round off the visit to this famous campus.