Saturday, June 30, 2012

Question 10 - reporting strategies and feedback

The following reporting strategies were used:

  • 15 final reports to funding body
  • 6 reports or products to audience
  • 5 interim reports
  • 5 conference presentations
  • 4 reports to stakeholders
  • 2 journal articles
  • 2 reports to department (meetings)
  • 2 faculty L&T
  • 1 minutes of project meetings to stakeholders
  • 1 report to faculty executive


When asked how their reports were received, 10 of the 15 participants reported receiving no feedback beyond a thank-you. This provoked a range of reactions, from indifference to relief to annoyance. I think it was clear, though, that more than tokenistic feedback is required - even just a follow-up from someone asking about the project and showing interest in how it has developed or moved on.


One participant mentioned that it would be good to receive help with publishing in the L&T field rather than in their discipline field, where of course they can publish easily. Considering that a number of people mentioned conference presentations and journal articles, perhaps this is a service the LTC could follow up on?

Friday, June 29, 2012

Question 9 - usability and generalisability

The first part of this question asked whether the results of the evaluation were usable, and the answers were interesting: 11 answered yes, two answered no and two projects gave no answer. Since only 5 projects did project evaluation (and a further 2 did some form of product evaluation), it's interesting that 11 answered yes. This indicates that they were thinking about whether the project results were usable, not the evaluation results (although this may have been the fault of the interviewer for not clarifying well enough). Still, it is another example of the crossover in meaning between research and evaluation.
I think it would take a brave person to say that nothing usable came of their project. In fact, one participant talked at length about this: that there is no room in the academic world for admitting failure.

So back to the 11 yes answers. The five projects that had evaluated were included in this number, so it is good to see that there was in fact benefit in evaluating the projects. Furthermore, of the two projects that did product evaluation, one answered yes to the question and one answered no. The latter is an interesting case (14). The product was evaluated, but in the end no one implemented the recommendations or used the resources produced in the project; reasons quoted included 'no time' and 'no one was interested'. This leads back to the question about whether stakeholders were consulted at the time of application. When we look at the answer to that question for this project, we find that the project leader did not consult and in fact was not sure what exactly a stakeholder is.

As for the part on generalisability, there was a range of answers. Some stated categorically no, but others, when pressed, reflected and gave various degrees of yes, from some to a lot. It was clear, though, that few had thought about this question, and obviously they had not reported on it. The other theme that emerged is the difference between content and process. Some were stuck thinking of their project as a discipline-specific thing that couldn't possibly be transposed, but others mentioned that their process could be followed by anyone doing similar evaluations and projects. Interestingly, though, few had actually explained the process in the report, so how could anyone else follow it? Perhaps this level of detail could be found in research publications.

Question 8 - review of evaluation plan

This set of questions asked whether the evaluation plan was reviewed and if so what was the benefit of this review.
Of the 6 projects who previously said they had a plan, only 3 said they had it reviewed. However, I have to say that these answers were a little misguided: what they really meant was that they got feedback on their project at some stage. One mentioned the steering group, another mentioned student and colleague feedback on the product, and the third mentioned feedback from the application review about who should be consulted during implementation of the project.
So in effect none of the evaluation plans was really reviewed before submission, and just one mentioned formative checking-in with the steering group.

Saturday, June 23, 2012

Question 7 - Key evaluation questions

I think this set of questions - 'Were key evaluation questions stated?' and, if yes, could they be answered adequately - did not produce any fruitful data. Out of the 15 projects, 13 said no, and for the two who said yes there was no evidence of these questions in the application or the report.

This could suggest that there is no clear conception of what evaluation is. Though most people interviewed could talk about the topic, when put on the spot with this question the lack of evidence points clearly towards evaluation not being done well, or at least not being clearly understood, or perhaps towards there being different perceptions of evaluation. In other words, there is no 'one size fits all'.
An interesting quote: "although not stated, it is understood (implicit)". But is it?


Other reasons for no key evaluation questions being stated could lie in the inherent nature of action research and developmental evaluation, whereby the questions are developed as you go.

Q6 - did the evaluation plan go to plan

This question asked whether an evaluation plan was written and presented in the application or the report. Nine interviewees said no and six said yes. It is interesting that 7 people originally stated that they evaluated (2 of which were evaluations of a product rather than the project), and yet only 6 had a plan. In fact, two of the 6 yeses wrote a plan in their application but didn't actually end up evaluating. To compare:



Project   Evaluated the project?       Had an evaluation plan?
1         N                            N
2         N                            Y (in application)
3         N                            Y (in application)
4         Y                            N
5         Y                            N
6         Y                            Y
7         Y                            N
8         N                            N
9         N (evaluated the product)    Y
10        Y                            Y
11        N                            N
12        N                            N
13        N (evaluated the product)    Y
14        N                            N
15        N                            N
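
As a side note, the comparison can be made concrete with a minimal Python sketch. The row values below are my own hand transcription of the table above (an assumption for illustration, not the raw interview data):

    # Minimal sketch (assumed transcription of the table above):
    # cross-tabulate "evaluated the project?" against "had an evaluation plan?".
    from collections import Counter

    # One (evaluated, plan) pair per project, 1-15. "N (product)" marks projects
    # that evaluated only the product; "Y (application)" marks plans that
    # appeared in the application only.
    rows = [
        ("N", "N"), ("N", "Y (application)"), ("N", "Y (application)"),
        ("Y", "N"), ("Y", "N"), ("Y", "Y"), ("Y", "N"), ("N", "N"),
        ("N (product)", "Y"), ("Y", "Y"), ("N", "N"), ("N", "N"),
        ("N (product)", "Y"), ("N", "N"), ("N", "N"),
    ]

    # Count how often each combination occurs across the 15 projects.
    for (evaluated, plan), count in sorted(Counter(rows).items()):
        print(f"evaluated={evaluated!r:15} plan={plan!r:20} count={count}")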


Looking at the comments people made about whether the evaluation plan went to plan, there were two major themes running through the answers. The main one was that most people were thinking about the project - did it run as planned - and not the evaluation (in some instances this could have been down to how I phrased the question). There appeared to be no link between having an evaluation plan and checking whether it ran as planned, i.e. no formative mechanisms or critical reflection checks along the way. The second theme was time. Most of those who talked about why the plan did not run as planned indicated that, due to factors beyond their control or unplanned circumstances, the time it took to do the project/study meant either there was no time to do the evaluation or the project simply ran out of time in general.
This could indicate that more time has to be built into project planning for the unexpected happenings that are bound to crop up.

Friday, June 22, 2012

Q5 - Stakeholders (primary and secondary)

This question asked whether stakeholders and study audiences were identified (and hence consulted - though this wasn't asked, it was implied), and furthermore whether both primary and secondary stakeholders were identified.
The aim of this set of questions was to see whether there was any understanding of the terminology and also the relevance of identifying these groups.
There is a good paper on working with evaluation stakeholders (Bryson, Patton & Bowman, 2011) which offers a rationale, a step-wise approach and a toolkit. They define stakeholders as 'individuals, groups or organisations that can affect or are affected by an evaluation process and/or its findings' (p. 1).

In Stufflebeam's 1974 seminal paper on meta-evaluation (republished in 2011), one of his eleven essential criteria for the technical adequacy of evaluations is relevance: 'This criterion concerns whether the findings respond to the purposes of the evaluation. What are the audiences? What information do they need? To what extent is the evaluation design responsive to the stated purposes of the study and the information requests made by the audiences? The concern for relevance is crucial if the findings are to have more than academic appeal and if they are to be used by the intended audiences. Application of the criterion of relevance requires that the evaluation audiences and purposes be specified. Such specifications essentially result in the questions to be answered. Relevance is determined by comparing each datum to be gathered with the questions to be answered.'



Themes from the interviews:
  • Six projects mentioned students
  • Some confusion about this topic
  • Stakeholders either not thought of, or only implicitly stated in the application

Wednesday, June 20, 2012

Q4 - purpose and scope of evaluation

There were two themes that came out of this question. Most people did not detail the purpose and scope of the evaluation in the application or final report (only 2 out of 15, roughly 13%, did). Most of these people explained why they didn't: they felt it wasn't required, and no importance was placed on it. However, many people [how many?] said they did scope the evaluation (though there was no evidence of this in the report), which perhaps indicates some confusion over what this question meant.

The follow-up to this question asked how the information from the evaluation was used, or was planned to be used. The aim was to see if there was any correlation between those who scoped the evaluation and how the evaluation results were actually used. Themes from the responses:

  • To produce guidelines and / or recommendations
  • Publications and conferences
  • Unsure
  • Feed into a larger grant

Monday, June 18, 2012

Question 3 - frameworks and shortcomings

These two questions asked which approach, framework or method was used to evaluate. There is limited data, since only 7 of the 15 projects really did any kind of evaluation.

Of these 7, 3 didn't use any particular framework (one could infer that they didn't know the names of any frameworks, or didn't appreciate the benefit of using one).
The other 4 named frameworks: two created their own (though these were not rigorous or tested), one cited CICTO, which is used for software evaluation rather than project evaluation, and one cited PAR, which is not an evaluation approach but a research approach. One could argue that this amounts to developmental evaluation, and could also infer that there is some confusion over the difference between evaluation and research.


When asked about the shortcomings of frameworks, the following themes were evident in the answers:

  1. External eyes would be helpful 
  2. Formative and summative is imperative
  3. Don’t be too ambitious – time is limited
  4. Need a good project manager

Sunday, June 17, 2012

Q2 - skills of the evaluator

A range of evaluators were identified, though mostly it was a team approach. Three themes came out when people were asked whether the evaluator had sufficient skills in evaluation and whether they were able to be objective.

  1. evaluation skills were lacking in the team
  2. external pair of eyes may have been beneficial
  3. hard to be truly objective 

This leads to findings from a later question that more support resources around evaluation are needed.

Saturday, June 16, 2012

Question 1 - did you evaluate

I first looked at question B1 - if no evaluation was carried out, why not? Of the 15 interviews, only 5 did project evaluation (or said/thought they did). Of the 10 nos, I would say four did product evaluation in some format, and one did some informal evaluation as part of an action research project, which by its nature has reflection and redesign built in.
Of the five who said yes they did evaluation, three did it as part of an action research approach and two did product evaluation.
To summarise:

Key:
Prod/Prog – evaluated a product or program which was the outcome of the project
AR/DE – used an action research approach so did a kind of developmental evaluation


Project   Did you evaluate?   Type
1         No
2         No
3         No
4         Yes                 DE
5         Yes                 Prog
6         Yes                 AR
7         Yes                 PAR
8         No                  Prod
9         No                  Prog
10        Yes                 Prog
11        No
12        No
13        No                  Prod
14        No                  Prog
15        No



There were three main themes that came through in the answers:
  1. Different concepts of what is meant by evaluation – confusion over product, process and outcomes. Not a universal understanding of what evaluation is
  2. Action research and developmental evaluation – informally evaluating whilst not actually writing formal reports etc
  3. Lack of money and time to do evaluation


Monday, June 11, 2012

Trawling the Data

I've been playing with Leximancer for a few weeks now. To begin with, I just loaded all of the transcripts into it and got a very complex theme and concept map. There was nothing strikingly interesting in that because there was a lot of irrelevant information (it included all of the questions too).

So now I am trying to extract the answers to each of the questions and analyse them separately. This is proving really difficult because I was not consistent in my questioning: I often let the interviewee ramble off topic and did not get all of the questions answered adequately.

However, going back through the data time and time again is good, as it gives me a feel for what is being said. I'm not confident about how to use Leximancer well, so I have supplemented it with two things. Firstly, I uploaded the transcripts into Wordle to see what words come up. Secondly, I went through the summaries I made of the interviews and looked to see what my themes were. By triangulating this evidence I hope to come up with something relevant.
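
As a rough, offline complement to Wordle, a minimal Python sketch along these lines could tally word frequencies across the transcripts. The file-naming pattern and stop-word list are assumptions made for illustration, not the actual transcript files or any Leximancer settings:

    # Minimal sketch: count the most frequent words across interview
    # transcripts, assuming plain-text files matching a hypothetical
    # naming pattern transcript_*.txt in the current directory.
    import glob
    import re
    from collections import Counter

    STOP_WORDS = {"the", "and", "that", "was", "for", "with", "this",
                  "you", "they", "but", "not", "have", "had", "are"}

    counts = Counter()
    for path in glob.glob("transcript_*.txt"):
        with open(path, encoding="utf-8") as f:
            words = re.findall(r"[a-z']+", f.read().lower())
        counts.update(w for w in words if len(w) > 2 and w not in STOP_WORDS)

    # The top of this list is roughly what a word cloud makes prominent.
    for word, n in counts.most_common(20):
        print(f"{word}: {n}")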


The real learning in this is for me, though. My questioning technique was really not consistent, and next time I will read all of the questions, even if I am repeating myself, say them clearly, and not try to answer for the interviewee!