Wednesday, May 10, 2017

Visiting UCLA

Well, I never thought the day would arrive, but I finally found myself on the UCLA campus visiting the Chair of Education, Professor Christina Christie, along with Professor Marvin Alkin, the renowned evaluation scholar.

We had only a short time to meet and there was no real agenda, but my aim was to present the findings of my PhD and get some feedback from them.

Of course the time flew and I only achieved half of what I had aimed for, but nevertheless some learning was certainly had! Marv's brain was as sharp as a razor and he asked me many questions, including lots about the PhD experience in Australia. There is a marked difference between our two countries: in the USA you take two years of classes in your chosen discipline before beginning the doctoral journey, then spend a year 'qualifying' (I'm not exactly sure what that involves), and only after defending your research proposal can you start the research itself. So American students basically spend 6-7 years full-time on their PhD, whereas I have spent the same amount of time but as a part-time student.

Anyway, here are some of the take-home messages - or rather, questions for further reflection - from this visit:

1. Is there really a difference between a project and a program? 

Marv was quite insistent that the two were interchangeable. I actually disagree; the difference may be minor, but I think my understanding of the context in which I was writing and researching helps me define it.

A project can be large or small, funded or unfunded, and in such a project there is an aim to change something (in my case, to improve teaching by introducing an innovation, be it technological or methodological). The project plans how the change will occur, implements the change, and observes what happens to the output (in my case, student learning or the student experience). The evaluation of the project can simply observe any change in outcome, but could (and should) formatively evaluate the process and reflect on the learning that takes place for both the teacher and the student.

A program can also be large or small, funded or unfunded (though most often it is large and funded). However, the aim of a program is usually to provide a service that will result in an outcome - usually the social betterment of the program's participants. The evaluation of the program often aims to judge whether the program has been successful or not, sometimes with the aim of continuing (or discontinuing) the program's funding, but sometimes to recommend changes in how the program could be run better.

So as you can see, the two items and their evaluation are very similar, and the terms are often used interchangeably. In the write-up of my thesis I need to revisit my definitions and perhaps clarify these nuances to make them clear to the examiners. Having said that, as I wrote this I actually struggled to articulate the differences - more work needed here!

2. Improvement Science

As I explained these nuanced differences to Tina and Marv, they looked knowingly at each other and said, "you should read this". They passed over a copy of the latest issue of the journal New Directions for Evaluation. This special issue introduces the field of Improvement Science and discusses the overlaps and differences between it and evaluation.

In a nutshell, improvement science is another term for formative evaluation leading to incremental change. There is a wealth of literature on this topic, so my summary is just the tip of the iceberg; however, I really like this idea because using this terminology could help overcome the misconceptions many people have about evaluation. I think the term would certainly appeal to the science, engineering and IT communities.

I will incorporate this, and information from the special issue articles, into my discussion chapter, as one of the main topics I discuss from my findings is the learning dimension of evaluation, which aligns perfectly with the term 'improvement science'. In the first article of the special issue, Christie discusses the similarities with Developmental Evaluation, along with Patton's response on how the two actually differ. Again useful for my thesis, as DE was an angle I discussed in the introductory chapter.

So those were the two main things I took from the visit. Both professors were interested (but not surprised) to hear my findings about misconceptions of evaluation and misalignment of praxis. They were also very interested in the online tool I developed, though we ran out of time to get any real feedback on it. I have since sent them the link to the tool and asked them to share and comment if they have time.

There was one final thing I had hoped to get from the meeting: some suggested names of possible examiners. However, we did not get to that. I have sent a follow-up email but haven't heard back, so I will chalk that one up to experience and work with the list I currently have.

3. Taking a class

I was invited to attend a class called 'Procedural Issues in Evaluation'. It was taught by one of Tina's postdocs (Jenn Ho) and brought together about nine graduate students: some doing doctoral studies, some just auditing, and others taking the subject as an elective from other degree programs. I was able to briefly talk about myself and my research - which in itself was a learning process. Over the trip I had occasion to do this numerous times, and it certainly got easier to condense my seven years into a few sentences!

In this class I was guided to an evaluation resource pack provided by the Kellogg Foundation (one of the largest philanthropic foundations in the USA, offering grants that give children, families and communities opportunities to reach their full potential). This resource has some excellent information (pp. 6-16) on challenging assumptions, along with recommendations for good evaluation practice. Again, I'll be able to refer to some of this in my thesis, possibly in the discussion section and maybe even in the introduction.

Another resource/website I heard much about through this class was the Annie E. Casey Foundation, particularly its theory-of-change material. This will also come in handy, perhaps not for the thesis but certainly for future work as an evaluator (if that eventuates).

The aim of the class was to discuss the similarities and differences between using a logic model and a theory-of-change (TOC) approach in an evaluation. It was great to be a student again and actually learn by doing rather than just by reading.

After the class I spent an hour or so with Jenn discussing evaluation and PhDs, and it was a great way to round off the visit to this famous campus.

Saturday, August 18, 2012

More thoughts on phase 1 interviews

I've been reading through each of the interviews and marking up my themes, and I've come across another theme that is partially related to evaluation but also related to grants and projects.

More than a few people I interviewed complained that even though their project created or produced some wonderful outputs, not everyone was willing to take up the new ideas. For example, redesigning some units as online units, with the content chunked up, meant there were now more opportunities to interact with the material - but the students felt this was too much work, and so did the tutors (because of the marking). This leads to the point that if you don't fully involve your stakeholders, then the project outcomes can't really meet their needs.

On the other hand, another participant talked about this and said that there has to be a balance, because when you try to please all of the stakeholders in this way you end up with a substandard product, i.e. you 'dumb it down' to keep everyone happy, and some of those novel and 'out there' ideas are lost in translation. This participant said that often it is a case of timing: if no one takes up your new idea/product as you intended, it may be because they are just not ready yet.

That in turn leads to the conclusion I keep coming back to: evaluation has to be done later down the track (as well as formatively and summatively within the term of the project). Often it is too early to say whether a project has been successful or not. You can say whether you met your outcomes, but you cannot say whether those outcomes have had impact. Nor do people report what didn't work, or why their outputs may not be being taken up.

I think perhaps there is also some confusion between steering groups and stakeholders. A steering group may well rein you in, but stakeholders will be sure to tell you what they want and why/how they want it. It's one thing to listen to their needs and then make an informed decision on how you are going to design your product (say); it is another to present it to a steering group, because then you pretty much have to make the changes they request.

Saturday, July 21, 2012

Analysis of Phase One

The research questions for this phase are:
  1. What evaluation forms and approaches have been used in Macquarie-funded learning and teaching projects? [Easy to answer from the interview data.]
  2. What are the issues and challenges in evaluating learning and teaching projects?
    [This was the original Q11 in the interviews. Briefly, items would include:
    • lack of skills - guidelines and support/resources needed
    • initial plans too ambitious
    • insufficient use of stakeholders
    • insufficient money/budget - to pay for extra help or input when needed, e.g. admin support
    • no feedback at any stage in the project
    • lack of time
      • to plan
      • for reflection/learning
      • too busy with teaching and other demands]
  3. What is understood by evaluation? [Can look at the misuse of, or confusion in, terminology.
    For example, evaluation vs. research and evaluation vs. project, in terms of both planning and results. There is also some misinterpretation between evaluation and feedback. Also look at Q12 from the interviews.]
  4. How does perception influence how evaluation is carried out in practice? Or: what influences how evaluation is carried out in practice? [For this I think I need to go through the data and pull out all examples where perception is discussed, albeit implicitly most of the time. I also need to look for other papers that have done this and see how they did it, and to refer back to the theoretical approach of the project as well as the realism paradigm. A rough first pass at flagging candidate passages is sketched below.]
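
Since most of this marking-up will happen by hand, a quick script could at least flag candidate passages to read first. This is only a minimal sketch: it assumes each interview transcript is saved as a plain-text file in a 'transcripts' folder (a made-up layout), and the term list is just a starting guess to be refined as the coding proceeds.

    import glob
    import re

    # Terms that might signal talk about perception (a starting guess only,
    # to be refined as coding proceeds).
    TERMS = ["perception", "perceive", "attitude", "belief",
             "assumption", "impression", "expectation"]
    pattern = re.compile("|".join(TERMS), re.IGNORECASE)

    # Assumes each interview transcript is a plain-text file in 'transcripts/'.
    for path in sorted(glob.glob("transcripts/*.txt")):
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                if pattern.search(line):
                    # Print the file, line number and matching line for review
                    print(f"{path}:{lineno}: {line.strip()}")

Anything flagged would still need reading in context, of course - a script like this only narrows down where to look, and the implicit mentions of perception will still have to be found the slow way.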