
Thursday, November 1, 2012

A Similar Study


Oliver, MacBean, Conole & Harvey, 2002
Using a toolkit to support the evaluation of learning

This research was prompted by a rejection of the assumption that all users have similar evaluation needs, which raises the problem that practitioners must be aware of the range of evaluation methods available to them and should be able to select the approach that best addresses their needs (p. 207). The study was primarily concerned with evaluating ICTs, and more specifically with the usability of a toolkit developed by JISC in the UK. The web-based toolkit was developed out of a realisation that the existing evaluation instruments weren't quite hitting the mark. The new toolkit combined a structured design process (as in previous toolkit approaches) with rich descriptions of methods and supportive resources (such as the 'cookbook' approach).
The toolkit has a six-step approach, which can be thought of as a combination of the contextual and the mechanical, or the strategic and the tactical, with the needs of the stakeholders driving the process.

  1. Identification of the audience for the evaluation
  2. Selection of an evaluation question
  3. Choice of an evaluation methodology 
  4. Choice of data collection methods 
  5. Choice of data analysis methods 
  6. Selection of the most appropriate format(s) for reporting the findings to the audience
The toolkit is organised into three sections: Evaluation Planner, Evaluation Advisor and Evaluation Presenter. The Planner section helps the user define the scope of their evaluation, and its output is an evaluation strategy and implementation guide. The Advisor section covers the empirical aspects of evaluation, and the Presenter section focuses on communicating findings to the stakeholders. Finally, an evaluation plan can be printed out.

This study was intended to provide formative feedback on the toolkit and summative information about its impact. The latter is not reported in this paper, as the time period did not allow for it. Observational studies were used, which involved participants working through the toolkit in a 'talk-aloud' protocol with an expert on hand to provide support and guidance. At the end of the session, users provided further feedback in the form of a focus group. Participants in this stage were all novices to the field of evaluation.
After this first round, modifications were made to the toolkit and a second round, in the form of a workshop, was carried out. Participants in this stage were from a range of backgrounds and experiences.
The study showed that working through the toolkit allowed users to design evaluation strategies tailored to their local needs, rather than simply falling back on familiar but inappropriate methods. Some limitations were noted: the time needed to work through the toolkit is about 4.5 hours, which may make it unsuitable for smaller projects. The other item of note was that some users felt uncomfortable when presented with an evaluation approach that did not sit well with them; a rank ordering of options may be a better solution.

Friday, June 29, 2012

Question 9 - usability and generalisability

The first part of this question asked whether the results of the evaluation were usable. The answers were interesting: eleven answered yes, two answered no, and two projects gave no answer. Given that only five projects did project evaluation (and a further two did some form of product evaluation), it's interesting that eleven answered yes. This indicates that they were thinking along the lines of whether the project results were usable, not the evaluation results (although this may have been the fault of the interviewer in not clarifying well enough). Still, it is another example of the crossover in meaning between research and evaluation.
I think it would take a brave person to say there was nothing usable that came out of their project. In fact, one participant talked at length about this: that there is no room in the academic world for admitting failure.

So back to the eleven yes answers. The five projects that had evaluated were included in this number, so it is good to see that there was in fact benefit to evaluating the projects. Furthermore, of the two projects that did product evaluation, one answered yes to the question and the other answered no. This is an interesting case (14). The product was evaluated, but in the end no one implemented the recommendations made or the resources produced in this project; the reasons quoted included 'no time' and 'no one was interested'. This leads back to the question about whether stakeholders were consulted at the time of application. When we look at the answer to that question for this project, we find that the project leader did not consult, and in fact was not sure what exactly a stakeholder is.

As for the part on generalisability, there was a range of answers. Some stated categorically no, but others, when pressed, reflected and gave varying degrees of yes, from some to a lot. It was clear, though, that few had thought about this question, and obviously had not reported on it. The other theme that emerges is the difference between content and process. Some were stuck in thinking of their project as a discipline-specific thing that couldn't possibly be transposed, but others mentioned that their process could be followed by anyone doing similar evaluation and projects. Interestingly, though, few had actually explained the process in the report, so how could another follow it? Perhaps this level of detail could be found in research publications.

Sunday, September 25, 2011

Lots of Links - more on meta-evaluation


Wondering whether I should look again at the MQ projects and contact the leaders anyway to see what they wrote in the evaluation methods section of their applications. But then I would need to ask Barb if I can get a copy of those applications, as they would not be publicly available. And then do I tell people that I have looked at their application before I interview them? I would need to disclose this.

Need to add the word metaevaluation to the title of the PhD.

Mark, Henry and Julnes (2000) on reasons for evaluating. This would be more relevant than the reference I used in my proposal, which was from 1998 and was only editors' notes... maybe?

Add the link to the Western Michigan University evaluation checklists page to my blog for future reference: http://www.wmich.edu/evalctr/checklists/about-checklists/

Usability evaluation report: useful for my usability testing, with some nice references and a way to lay out the findings, etc.

Papers to be read
Metaevaluation, Stufflebeam
Evaluation bias and its control, Scriven
Metaevaluation in practice: selection and application of criteria, Cooksy & Caracelli
Metaevaluation revisited, Scriven
Metaevaluation as a means of examining evaluation influence, Oliver
A basis for determining the adequacy of evaluation designs, Sanders & Nafziger
Mayhew, F. (2011, May 9). Mandated Evaluation: Integrating the Funder-Fundee Relationship into a Model of Evaluation Utilization. Journal of MultiDisciplinary Evaluation, 7(16). Available: http://survey.ate.wmich.edu/jmde/index.php/jmde_1/article/view/315/315