Saturday, May 20, 2017

Canadian Evaluation Conference

A Storify of my tweets and favourites from others


My favourite quotes and comments from the conference:

evaluation is about problem solving not fault finding @AccessAlliance


Challenging assumptions is difficult when hierarchies are involved - people are afraid to be truthful, says @Mark Stiles

evaluators need to think beyond the report @Nancy Snow

so much more learning can occur when eval introduced early. @TolgaYalkin 

Blog about evaluation findings as a way to share info and show value @Nicholas_Falvo

Role of evaluation - “evaluation is essential for learning” (Uitto et al., 2017) @Astrid Brousselle

The single biggest problem about communication is the illusion that it has occurred @G_KayeP

Funding does not lead to impact. Funding leads to knowledge, which (once applied) leads to impact. @jwmcconnell

Lesson learned in DE: don’t assume that, just because people come together for a project, that they have the same understanding! @StrongRoots_SK

Building eval capacity is as messy as learning, to be transformational we need to help them understand time needed @carolynhoessler

Evaluators work in the space between the thinkers/doubters and the doers/faithful @a_Kallos

Intimacy = in 2 me see @Paul LaCerte


Penny Hawkins summarised the panel presentations (the second day's keynote): misaligned expectations; learning vs. accountability; valuing evaluation. This is a reaffirmation of the main points discussed in my thesis.

This online tool, developed by @eval_station, can be used to assess organisational evaluation capacity. It is similar to a benchmarking instrument and is intended to be used in a group setting rather than individually. "The conversation is often of greater value than the answers to the questions"


Met Dr Justin Jagosh, the founder of CARES: Centre for Advancement in Realist Evaluation and Synthesis: https://realistmethodology-cares.org 

Through the website I was able to make contact with Dr Prashanth N S, who maintains a reading list of articles that identify how realism is used in practice. I wanted to read these to help with the methodology section of my thesis. Great connection.

https://www.mendeley.com/community/critical-realism-and-realist-evaluation/


~~~~~~~~~~~~~~~~~~

3 Questions I was asked about the poster:

1. Who is the audience
2. What is innovative about it?
3. What impact will it have?

These questions really made me think about the design of the poster/project, and I hope they will help me when writing it all up.

1. The findings from this study consisted of a set of recommendations, some aimed at those carrying out the evaluation and others aimed at the funders of these small projects, usually at the institution level or even the Faculty or School level. The former group can use some of the strategies to assist them in their evaluative efforts and help them grow their evaluative skills. The latter group may learn more about the needs of the grant awardees and be able to modify expectations and behaviours.
These two groups make up the audience for this project. However, I believe the findings and recommendations could be transferable to other sectors that offer small-scale grants for introducing innovations.

2. I'm not sure I would describe my research as innovative - but here goes. The evaluation framework, which was developed through action research cycles and resulted in an online interactive tool, was a great output from this study. A need for such a resource was identified, and the format of the final product is quite innovative in its simplicity.

3. I'm hoping that the impact of this study will come about when the identified audiences start to evaluate their work better, through thoughtful planning and an understanding of the available options and requirements. When these small innovations and projects are evaluated, the findings need to be disseminated so that others can learn from and improve on them, leading to an improved learning experience for our students.




Wednesday, May 10, 2017

Visiting UCLA

Well I never thought the day would arrive but finally I found myself on the UCLA campus visiting the Chair of Education at UCLA, Professor Christina Christie along with Professor Marvin Alkin, the renowned Evaluation Scholar.

We had just a short time to meet and there was no real agenda but my aim was to present the findings of my PhD and get some feedback from them.

Of course the time flew and I only achieved half of what I had aimed for, but nevertheless some learning was certainly had! Marv's mind was razor sharp and he asked me many questions, including lots about the PhD experience in Australia. There is a marked difference between our two countries: in the USA you take two years of classes in your chosen discipline before beginning the doctoral journey. Students then spend a year 'qualifying' (I'm not exactly sure what that involves) and then defend their research proposal; only then can they start the research. So they basically spend 6-7 years full-time doing their PhD, whereas I have spent the same amount of time as a part-time student.

Anyway here are some of the take home messages - or rather questions for further reflection from this visit:

1. Is there really a difference between a project and a program? 

Marv was quite insistent that the two were interchangeable. I actually disagree; the difference may be minor, but I think my understanding of the context in which I was writing and researching helps me define it.

A project can be large or small, funded or unfunded, and there is an aim to change something (in my case, to improve teaching by introducing an innovation, be it technological or methodological). The project plans how the change will occur, implements the change, and observes what happens to the output (in my case, student learning or the student experience). The evaluation of the project can simply observe any change in outcome, but could (and should) formatively evaluate the process and reflect on the learning that takes place for both the teacher and the student.

A program can also be large or small, funded or unfunded (though most often it is large and funded). However, the aim of a program is usually to provide a service which will result in an outcome - usually the social betterment of the program's participants. The evaluation of the program often aims to judge whether the program has been successful, sometimes to decide whether the program's funding should continue, but sometimes to recommend how the program could be run better.

So, as you can see, the two items and their evaluation are very similar, and the terms are often used interchangeably. In the write-up of my thesis I need to revisit my definitions and perhaps clarify these nuances to make them clear to the examiners. Having said that, as I wrote this I actually struggled to articulate the differences - more work needed here!

2. Improvement Science

As I explained these nuanced differences to Tina and Marv, they looked knowingly at each other and said "you should read this". They passed over a copy of the latest issue of the journal New Directions for Evaluation. This special issue introduces the field of Improvement Science and discusses the overlaps and differences between it and evaluation.

In a nutshell, improvement science is another term for formative evaluation leading to incremental change. There is a wealth of literature on this topic, so my summary is just the tip of the iceberg; however, I really like this idea, because using this terminology could help overcome the misconceptions many people have about evaluation. I think the term would certainly appeal to the science, engineering and IT communities.

I will incorporate this, along with information from the special issue articles, into my discussion chapter, as one of the main topics I discuss from my findings is the learning dimension of evaluation, which aligns perfectly with the term 'improvement science'. In the first article of the special issue, Christie discusses the similarities with Developmental Evaluation, and Patton responds on how they actually differ. Again, this is useful for my thesis, as DE was an angle I discussed in the introductory chapter.

So those were the two main things I took from the visit. Both professors were interested (but not surprised) to hear my findings about misconceptions of evaluation and misalignment of praxis. They were also very interested in the online tool I developed, though we ran out of time to get any real feedback on that. I have since sent them the link to the tool and asked them to share and comment if they have time.

There was one final thing I had hoped to get from the meeting: some suggestions of names of possible examiners. However, we did not get to that. I sent a follow-up email but haven't heard back, so I will chalk that one up to experience and work with the list I currently have.

3. Taking a class

I was invited to attend a class called 'Procedural Issues in Evaluation'. It was taught by one of Tina's postdocs, Jenn Ho, and included about nine graduate students: some were doing doctoral studies, some were just auditing, and others were taking the subject as an elective from other degree programs. I was able to briefly talk about myself and my research, which in itself was a learning process. Over the trip I had occasion to do this numerous times, and it certainly got easier to condense my seven years into a few sentences!

In this class I was guided to an evaluation resource pack provided by the Kellogg Foundation (one of the largest philanthropic foundations in the USA, offering grants to give children, families and communities opportunities to reach their full potential). This resource has some excellent information (pp. 6-16) on challenging assumptions and recommendations for good evaluation practice. Again, I'll be able to refer to some of this information in my thesis, possibly in the discussion section and maybe even in the introduction.

Another resource/website I heard much about through this class was the Annie E. Casey Foundation, particularly its theory-of-change materials. This resource will also come in handy, perhaps not for the thesis but certainly for future work as an evaluator (if that happens).

The aim of the class was to discuss the similarities and differences between using a Logic Model and a Theory of Change (TOC) approach in an evaluation. It was great to be a student again and actually learn by doing rather than just by reading.

After the class I spent an hour or so with Jenn discussing evaluation and PhDs, and it was a great way to round off the visit to this famous campus.

Saturday, October 8, 2016

Chapter 2

Just looking at Chapter 2, the lit. review.

Some feedback from Marina: try to use more action words to stress the need for what I did and why.

I then came across a table summarising the articles I used in the lit review - this will be fabulous as an appendix, or perhaps even as part of the intro to Chapter 2?

Whilst doing that, I recalled and reread an article by Oliver, McBean, Conole & Harvey (2002). They had developed an online toolkit to help evaluators of projects get to grips with evaluation, on the premise that one size does not fit all:

The toolkit involves six steps, derived from the literature and from research into evaluation practice:
  1. Identification of the audience for the evaluation
  2. Selection of an evaluation question
  3. Choice of an evaluation methodology
  4. Choice of data collection methods
  5. Choice of data analysis methods
  6. Selection of the most appropriate format(s) for reporting the findings to the audience 
(pp. 200-201).

This is eerily similar to mine. However, their tool took 4.5 hours to complete, and the authors acknowledge that "Although it could be argued that this reflects the complex demands of evaluation, and thus is not unreasonable, it does mean that the toolkit is ill-suited to small, quick studies" (p. 207).
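To get these six steps clear in my own head, here is a minimal sketch of how the toolkit's structure could be represented in Python. All of the names and the example choices are my own invention, purely for illustration - this is not code from the Oliver et al. toolkit or from my own tool.

```python
# Illustrative sketch of the six-step toolkit structure described by
# Oliver, McBean, Conole & Harvey (2002). Names are hypothetical.

TOOLKIT_STEPS = [
    "Identify the audience for the evaluation",
    "Select an evaluation question",
    "Choose an evaluation methodology",
    "Choose data collection methods",
    "Choose data analysis methods",
    "Select reporting format(s) for the audience",
]

def plan_evaluation(choices):
    """Pair each step with the evaluator's choice, flagging undecided steps."""
    plan = {}
    for number, step in enumerate(TOOLKIT_STEPS, start=1):
        plan[number] = (step, choices.get(number, "not yet decided"))
    return plan

# Example: an evaluator who has only settled the first two steps so far.
draft = plan_evaluation({
    1: "Faculty grant committee",
    2: "Did the innovation improve student engagement?",
})
```

The point of the sketch is that the six steps form an ordered checklist: an evaluator can record decisions incrementally, and any step left undecided is flagged rather than silently skipped.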

Narrative

compiling some literature and words around the use of narrative

The Narrative Construction of Reality (Bruner, 1991):

"...we organise our experience and our memory of human happenings mainly in the form of narrative—stories, excuses, myths, reasons for doing and not doing..." (Bruner, 1991, p4). 

"Narratives, then, are a version of reality whose acceptability is governed by convention and "narrative necessity" rather than by empirical verification and logical requiredness..." (Bruner, 1991, p4). 

Bruner provides 10 features of a narrative:

1. Narrative diachronicity - an account of events occurring over time.
2. Particularity - context or a particular embodiment
3. Intentional state entailment - not causality, just the basis for interpretation of what happens
4. Hermeneutic composability - interpretation through intention attribution and background knowledge
5. Canonicity and breach - (not sure how this works...)
6. Referentiality
7?
8. Normativeness
9. Context sensitivity and negotiability
10. Narrative accrual
In this paper, Bruner describes how reality is constructed through narrative principles and how thought is enunciated through discourse. He concludes that his work has just begun and that he wants to show how narrative can organise 'the structure of human experience' (p. 21).

The Value of Narrativity in the Representation of Reality (White, 1980):

"narrative might well be considered a solution to a problem of general human concern, namely, the problem of how to translate knowing into telling" (White, 1980, p.6)

So what is the difference between narrative and discourse? Discourse is subjective and identifies an 'ego', whereas narrative is objective and is just the logical progression of facts and information that join to tell the account.

"Benveniste shows that certain grammatical forms like the pronoun "I" (and its implicit reference "thou"), the pronominal "indicators" (certain demonstrative pronouns), the adverbial indicators (like "here," "now," "yesterday," "today," "tomorrow," etc.) and, at least in French, certain verb tenses like the present, the present perfect, and the future, find themselves limited to discourse, while narrative in the strictest sense is distinguished by the exclusive use of the third person and of such forms as the preterit and the pluperfect" (p. 7).

So, I think I am using discourse, not narrative, if I am using 'I'?


Next I need to read the work of Albert Bandura? Actually, not much use - it's more about social cognitive theory...

This article may have something of interest:

Narrative Means to Preventative Ends: A Narrative Engagement Framework for Designing Prevention Interventions
Michelle Miller-Day & Michael L. Hecht
https://doi.org/10.1080/10410236.2012.762861