Saturday, November 10, 2012

The write/right journal

I'm going to try to publish my literature review, but the question I'm now finding hard to answer is which journal. The suggestion was to target one of the journals cited in the paper, so for each of these I'm checking the journal requirements, particularly the length of papers accepted, since this one is particularly long. Summaries of journals are on this page. Studies in Evaluation seems like a good possibility: they accept really long articles (over 9000 words) and 4 of the articles in my paper came from this journal. Another possibility is Assessment and Evaluation in Higher Education, but they limit articles to between 3000 and 5000 words. I'm also interested in the Journal of MultiDisciplinary Evaluation, which accepts long articles; it is open access and states it has a quick turnaround for feedback. They suggest a limit of 10-12 pages, though they do accept longer articles.


Thursday, November 1, 2012

A Similar Study


Oliver, MacBean, Conole & Harvey, 2002
Using a toolkit to support the evaluation of learning

This research was prompted by rejecting the assumption that all users have similar evaluation needs, which raises the problem that practitioners must be aware of the range of evaluation methods available to them and should be able to select the approach that best addresses their needs (p. 207). This study was primarily concerned with evaluating ICTs, and more specifically with the usability of a toolkit developed by JISC in the UK. The web-based toolkit was developed out of a realisation that the current evaluation instruments weren't quite hitting the mark. The new toolkit came out of combining a structured design process (previous toolkit approaches) with rich descriptions of methods and supportive resources (such as the 'cookbook' approach).
The toolkit has a six-step approach, which can be thought of as a combination of contextual and mechanical (or strategic and tactical) concerns, with the needs of the stakeholders driving the process.

  1. Identification of the audience for the evaluation
  2. Selection of an evaluation question
  3. Choice of an evaluation methodology 
  4. Choice of data collection methods 
  5. Choice of data analysis methods 
  6. Selection of the most appropriate format(s) for reporting the findings to the audience
The toolkit is organized into three sections: Evaluation Planner, Evaluation Advisor and Evaluation Presenter. The Planner section helps the user define the scope of their evaluation, and its output is an evaluation strategy and implementation guide. The Advisor section covers the empirical aspects of evaluation, and the Presenter section focuses on communicating findings to the stakeholders. Finally, an evaluation plan can be printed out.

This study was intended to provide formative feedback on the toolkit and summative information about its impact; the latter was not done in this paper as the time period did not allow. Observational studies were used, which involved participants working through the toolkit using a 'talk-aloud' protocol with an expert on hand to provide support and guidance. At the end of the session, users provided further feedback in the form of a focus group. Participants in this stage were all novices to the field of evaluation.
After this first round, modifications were made to the toolkit, and then a second round, in the form of a workshop, was carried out. Participants in this stage were from a range of backgrounds and experience levels.
The study showed that working through the toolkit allowed users to design evaluation strategies tailored to their local needs, rather than simply falling back on familiar but inappropriate methods. Some limitations were noted: the time to work through the toolkit is about 4.5 hours, which may make it unsuitable for smaller projects. The other item of note was that some users felt uncomfortable when presented with an evaluation approach that was unfamiliar to them; a rank ordering of options may be a better solution.