Thursday, October 24, 2013

case study method - draft

What data will be collected and how?

Invitations to participate in this study were sent to the awardees of all Learning and Teaching grants in 2012. Three respondents agreed to become case studies. The projects were from the School of Education; the Faculties of Human Sciences and Arts; and the Office for First Year Experience (FYE). Each project had a project leader and a project manager, both of whom were briefed on the case study protocol (see Appendix A). The protocol included a minimum of three interviews with the project manager over the duration of the project (beginning, middle and end) and an interview with the project leader. These interviews were recorded for transcription, and the transcripts were sent to the participants for review. Access to documentation, including grant applications, minutes of any project meetings and final reports, was also part of the case study protocol, as was a request to attend any project meetings as a participant observer. This was only possible in one project.

How will the data be analysed?

Data generated in case study research can become unwieldy because evidence accumulates from multiple sources and in multiple formats. The amount collected for this study was kept manageable by following just three projects. Each project was analysed separately and the results compared. Content analysis of the interview and project documentation data was carried out using manual extraction of themes (Krippendorff, date).
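Although the theme extraction here was manual, a simple keyword tally can serve as an illustrative first pass over transcript text before manual coding. The excerpts and candidate theme words below are invented for this sketch, not taken from the study data:

```python
from collections import Counter
import re

# Illustrative interview excerpts (invented for this sketch)
transcripts = [
    "evaluation was built into the project plan from the start",
    "we saw evaluation as a reporting burden rather than learning",
    "the project plan changed after the mid-point evaluation",
]

# Candidate theme keywords (assumed, not from the actual coding frame)
themes = {"evaluation", "plan", "learning", "burden"}

counts = Counter()
for t in transcripts:
    for word in re.findall(r"[a-z]+", t.lower()):
        if word in themes:
            counts[word] += 1
```

A tally like this only points to where themes might live; the interpretive work of grouping and naming themes remains manual.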

limitations to the methodology:
Generalising results from just three cases can be problematic. However, the results could then be used to inform a pilot or wider study.

Introduction - case study

30 minute writing exercise:

In the higher education sector, learning and teaching research projects constitute one avenue of funding. These projects are funded either internally or externally by various private and governmental bodies (reference). With the move towards greater accountability for public funding and the drive to increase the quality of education, the necessity to incorporate evaluation measures is growing: not only to include evaluation, but to build it into the project life-cycle both systematically and rigorously. There is a wide body of literature on the evaluation of learning and teaching and the utilization of evaluation results to improve the student experience (references here).
A recent review of the learning and teaching project evaluation literature indicated that there is little evidence of the scholarship....
But there is little to no empirical research on the evaluation of learning and teaching projects in the higher education sector and how the benefits can be realised in practice. One recent research study conducted by the authors has begun to investigate the factors that inhibit the use of evaluation practices in the sector and found that project leaders' perceptions are an influencing factor. However, that paper called for further study into this area: more specifically, to observe a learning and teaching project in order to examine how those perceptions play out over time, and to investigate what can be done in practice to support the effective use of evaluation for wider-scale benefit to the organisation / institution and ultimately the student experience.
This study aims to investigate the evaluative measures used during three internally funded learning and teaching projects, and explore how the project leaders' perceptions of evaluation affected their praxis. Furthermore, this study will consider what can be done to overcome barriers to successful evaluation implementation and therefore enhance this praxis.

writing sandwich

The following was part of a writers' retreat:


area/discipline: education
topic: learning and teaching projects
sub topic: evaluation of such
2nd sub topic: higher education sector

working title:
a case study of the evaluation of learning and teaching projects in higher education: practices and perceptions

Problem:

In the higher education sector, grants can be obtained from internal and external sources to fund research into learning and teaching. The internal grants tend to run for around 12-18 months, with specific guidelines as to how the projects should be conducted and results disseminated. Evaluation of these research projects should enable the research to become more rigorous and supply evidence that the research is meeting its aims. However, there is some evidence from the literature that this evaluation is not taking place. One such study investigated a number of completed internally funded learning and teaching projects and showed that evaluation is influenced by the perceptions of the project leaders.

Aim:
The aim of this study is to follow three project leaders as they conduct their research projects and investigate how the evaluation unfolds: to identify the factors at play which influence the evaluation, and to observe what and how they evaluate and...


WOW factor of my project:
If money is to be spent wisely on research in learning and teaching, the results and findings of such research projects need to make an impact. We need to reflect on and learn from such findings and incorporate them into the development cycle. This will ultimately result in better student learning. The findings from this study can be used to educate researchers about the value of evaluation and help them see it as a learning mechanism rather than a punitive one. We also need to find a way to incorporate evaluation findings into the next round of projects so that they too can make an impact; in a way, to generalise findings so that a wider audience can benefit from the project.

Brown's 8 questions

Planning the next paper (4):
  1. Who are the intended readers: grant bodies/grant application reviewers/ grant applicants
  2. What did you do: Three recipients of locally funded learning and teaching project grants were 'followed' as they conducted their research projects over 12-18 months. There were three interviews with the project managers (beginning, middle and end of project) and one interview with the project leaders (the person who applied for the grant and had the initial idea). Other project documentation was also used as evidence / data, and project meetings were attended where relevant. The questions which guided the data collection were developed from the literature.
  3. Why did you do it: To collect evidence as it was being generated rather than retrospectively when people have to 'recall' information which may be influenced by their perception of evaluation and other unknown factors. To see how perceptions changed over the course of the project and what influencing factors caused these changes.
  4. what happened: all three had different experiences and perceptions of evaluation to begin with. This appeared to dictate how they conducted the evaluations and what they evaluated and what they did with any evaluation data.
  5. What do the results mean in theory: don't know at this stage....
  6. What do the results mean in practice: production of guidelines for project evaluation for/from the grant funding body. Development of a framework to guide project evaluation could arise from the findings.
  7. What is the key benefit for readers: the grant bodies (ie application reviewers) could use the findings to guide their application and review processes. Academics and project leaders could use this framework to support them in their learning through the project and enable the institution to benefit more widely from the projects. 
  8. What remains unsolved: How this framework could best be implemented, feedback from use of the framework.



Tuesday, August 13, 2013

progress check

Well, it's halfway through my 2nd full year of PhD and I'm still finding my way. This year feels like it has mostly been writing and rewriting. Phase 1 findings have been 90% written up into a conference paper that was originally intended for the Australasian Evaluation Society Conference in Brisbane. It was accepted, but funding from the department was not forthcoming. So a last-minute change of plans required a rewrite of the abstract, and I have recently received acceptance to The 5th Asian Conference on Education. It's in Osaka, but I will hopefully present virtually (funding pending). My dilemma now is how to reduce what is about an eight-and-a-half-thousand-word paper to just 5000 words for the conference. Also, how do I ensure the paper is selected for the journal, as a publication is really my intended outcome for participating in this conference?

My literature review has also been submitted to the journal Studies in Educational Evaluation. It was peer reviewed and received considerable constructive feedback. However, the editor did invite resubmission, so that has been done and I'm now awaiting the second round of feedback (or rejection).

So it is time to return to data collection for phase 2. I'm transcribing interviews from phase 2 and need to turn soon to analysis and writing. But first I need to do the wrap-up interviews for the phase 2 case studies.

What have I learned from all this writing? That improvements can always be made; that people see and read different meanings than you sometimes intend; that writing short (5000 word) articles is difficult; that I am able to write.

Also, I need to do some more work on theory. I have briefly touched on the use of pragmatism or realism, but it might be interesting to try to write a short paper, and certainly more for the theory chapter in the thesis.

And finally, whilst considering how to cut down the paper on phase 1, we have discussed the possibility of removing the Leximancer information and either trying to publish something on the methodology, using the results tables as examples, or putting that information into the thesis.


A call for a study of evaluation practice


Improving Evaluation Theory through the Empirical Study of Evaluation Practice

Nick L Smith, 1993

The author states that few studies have been done on the practice of evaluation, but that these are necessary in order to develop evaluation theory. The relationship between practice and theory often arises during metaevaluations, as these tend to highlight the problem of translating current evaluation theory into acceptable evaluation practice. The author calls for an increase in the number of research studies on evaluation practice (as opposed to evaluations of evaluation practice). At the time of writing, Smith explains that 'much of the current empirical justification of evaluation theories is from self-reported cases of what works' (p238). He also quotes Shadish, Cook and Leviton (1991) in this regard. A 1991 paper by Marvin Alkin studied the factors that influence theorists to change their conceptions of their own models or approaches; these are increased experience in conducting evaluations, and the accumulation of personal research on evaluation practice.

Smith explains that studies of evaluation practice are needed in order to know:
·      What works and how to make it happen
·      How to avoid bad practice
·      How local contexts differ and what works across different contexts
·      Where the problems or inadequacies of evaluation practice could be ameliorated by improved evaluation theory
If theories are presented in abstract, conceptual terms rather than in concrete terms based on how they would be applied in practice, then we cannot know how practitioners actually articulate or operationalize various models or theories, or whether they do so at all. It then becomes unclear what is meant when an evaluator claims to be using a particular theoretical approach. And if theories 'cannot be uniquely operationalized then empirical tests of their utility become increasingly difficult' (p240). Furthermore, if alternative theories give rise to similar practices, then theoretical differences may not be practically significant.

Smith discusses the use of contingency theories, an approach considered to be the strongest type of evaluation approach (Shadish, Cook and Leviton, 1991). These theories ‘seek to specify the conditions under which different evaluation practices are effective’ (p240). He then goes on to link this approach with the need for theoretical development alongside studies of practice. He calls for a continuation of public reflection by evaluation practitioners alongside more independent empirical studies of evaluation practice by evaluation researchers.

This article is some 20 years old now - need to check more recent articles from same author to see what has been done. Also check for articles citing this one.

Saturday, June 1, 2013

Evaluation theory vs practice


What Guides Evaluation? A study of how evaluation practice maps on to evaluation theory.

Christina Christie 2003

This study came in response to repeated calls from theorists for more empirical knowledge of evaluation, which would in turn help explain the nature of evaluation practice. A study with similar aims was carried out in 1987 by Shadish and Epstein; however, their study used a survey instrument designed by the researchers. The Christie study recognizes that 'theoretical orientation often cannot be accurately assessed through direct questioning because evaluation practitioners usually are not proficient in theory (Smith 1993), and so are unable to identify their particular theoretical orientation' (p9). A case study approach observing the behaviour of evaluation practitioners would be the usual alternative; however, while this offers depth, it cannot cover the breadth of understanding of how evaluators use theory to guide their work. This study is unique in that it used eight distinguished evaluation theorists with a broad array of perspectives to construct the survey instrument. These theorists are Boruch, Chen, Cousins, Eisner, Fetterman, House, Patton and Stufflebeam.

The conceptual framework is built on the work of Alkin and House (1992), using their taxonomy of three dimensions: methods, values and use. Each dimension has a continuum that further defines it: for methods, it runs from quantitative to qualitative; for values, from unitary to pluralist (the criteria used when making evaluative judgments); and for use, from enlightenment (academic) to instrumental (service-oriented). Each theorist was asked to submit one statement 'demonstrating the practical application associated with his theoretical standpoint related to each of the three dimensions' (p11). They were also invited to submit up to six additional statements; in total, the final instrument contained 38 items related to evaluation practice and was piloted with five practicing evaluators.

The participants in this study came from two groups: the theorists, who were asked to complete the survey instrument, and 138 evaluators working on a statewide Californian educational program called Healthy Start. Many of the latter were not evaluators by profession but school and program administrators, and so represent a cross-section of how evaluations are being conducted today. This group was also subdivided, for the reporting of results, into internal and external evaluators. The collection of demographic data produced some interesting findings. The majority of evaluators were female, white and over 40. In terms of education, 75% of the external evaluators were PhD qualified, but this aligns with their years of experience and self-rating of their evaluation knowledge and skills.

And a very interesting finding: only a small proportion of evaluators in this study sample were using an explicit theoretical framework to guide their practice. More on this later.

The analytic procedure used multidimensional scaling (MDS), whereby observed similarities (or dissimilarities) are represented spatially as a geometric configuration of points on a map. More specifically, this study used classical multidimensional unfolding (CMDU), an individual-differences analysis that portrays differences in preference, perception, thinking or behaviour and can be used when studying differences in subjects in relation to one another or to stimuli (p14). Furthermore, to interpret the CMDU results in this study, ALSCAL (Alternating Least-Squares Scaling Algorithm) was used to produce two dimensions: scope of stakeholder involvement and method proclivity.
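As a rough illustration of the underlying idea of representing dissimilarities spatially, here is classical (Torgerson) MDS in plain numpy. The toy dissimilarity matrix is invented, and this is the simple metric variant, not the unfolding (CMDU/ALSCAL) analysis Christie actually used:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) multidimensional scaling.

    Given an n x n matrix of pairwise dissimilarities D, return an
    n x k configuration of points whose Euclidean distances
    approximate D.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)     # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:k]      # take the top-k eigenvalues
    L = np.sqrt(np.maximum(eigvals[idx], 0)) # clip tiny negative values
    return eigvecs[:, idx] * L

# Toy example: three collinear "subjects" with distances 1, 1, 2
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
X = classical_mds(D, k=2)
```

For this toy matrix the recovered 2-D configuration reproduces the input distances exactly, since the points are collinear; real dissimilarity data would only be approximated.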

The first dimension ranged from users being simply an audience for the evaluation findings to users being involved in the evaluation at all stages, from start to finish. The second dimension, method proclivity, is the extent to which the evaluation is guided by a prescribed technical approach. One end of this dimension is characterized by partiality to a particular methodology that features predetermined research steps; Boruch anchored this end, as the experimental research design is at the centre of his approach to evaluation. The other end represents partiality to framing evaluations by something other than a particular methodology with predetermined research steps. Patton, for example, falls at this end through his utilization-focused evaluation, which is flexible in nature, calling for creative adaptation during its problem-solving approach.

So, in this way, plotting the theorists' responses on the two-dimensional axes helped to name and clarify the dimensions. The next step was to map the evaluators' practice onto the dimensions in order to compare how practitioners rated against theorists. Evaluators were divided into two groups, internal and external, due to noted professional characteristics. Results indicated that, generally, stakeholders have a more limited role in evaluations conducted by internal evaluators than in those conducted by external evaluators. In addition, internal evaluators are more partial to methodologies with predetermined research steps than are external evaluators. This analysis offers only a broad depiction of their practice; by investigating placement in each quadrant of the map, a more comprehensive understanding is produced.

In general, the external evaluators' practice was more like the theorists'. More specifically, they were most closely aligned with the theorist Cousins. Furthermore, although they are concerned with stakeholder involvement, they are partial to their methods and conduct evaluations accordingly. Internal evaluators were distributed evenly between the theorists, which reflects the diversity in their practices and implies that we cannot generalize about their categorization into any one genre of theoretical approaches.

Christie goes on to discuss the theorists' results and concludes that through this study it has become evident that even though theories may share big-picture goals, they don't necessarily share the same theoretical tenets or similar practical frameworks. In addition, 'the prescribed practices of a theory are not necessarily best reflected in its name or description' (p29). Her other major point was that 'despite some theoretical concerns related to stakeholder involvement, all of the theorists in this study do involve stakeholders at least minimally' (p30). However, many theorists have not chosen to incorporate changing notions of such involvement because of 'a common perception that… it is understood to be a part of the evaluation process, no matter one's theoretical approach' (p31).

In relation to practicing evaluators: they are often intimately involved in the program and therefore assume they understand how other stakeholders think and feel about it, and hence don't tend to involve them a great deal. Politics may play out more heavily with internal evaluators and may influence not only their decisions on stakeholder involvement but also their emphasis on prescribed methods. In terms of evaluator bias, the study found that internal evaluators may be aware of the importance of objectivity and tend towards more quantitative methods to increase the credibility of their findings. External evaluators, on the other hand, may be influenced by their colleagues' perception of the methods they use, with the thought that criticism could jeopardize potential for future work. Therefore both types of evaluators employ method-driven frameworks, influenced by the perception that the results are more defensible. And finally, this study shows that theory is not requisite to evaluation practice; in fact, evaluators adopt only select portions of a given theory. Even 'those who did claim to use a particular theory did not correspond empirically with the practices of the identified theorist' (p33). Therefore Christie concludes that 'the gap between the common evaluator and the notions of evaluation practice put forth by academic theorists has yet to be bridged' (p34).

Saturday, May 18, 2013

Two new directions

I was introduced to the appraisal framework (Martin & White 2005) recently which would provide a nice theoretical grounding to further work on conceptual conflation of research vs evaluation as well as perceptions of evaluation in general.
Website and corresponding references here: http://grammatics.com/appraisal/index.html and an overview here: http://grammatics.com/appraisal/AppraisalOutline/Framed/Frame.htm
and latest book: http://students.lti.cs.cmu.edu/11899/files/LanguageofEvaluationBook.pdf

Another thread I need to follow is a special issue of New Directions for Evaluation (2003): 
The Practice-Theory Relationship in Evaluation and in particular the editor's study: 
What Guides Evaluation? A Study of How Evaluation Practice Maps onto Evaluation Theory
and Henry and Mark's: Toward an Agenda for Research on Evaluation
Rounding off with Alkin's: Evaluation Theory and Practice: Insights and New Directions

Friday, May 17, 2013

Nvivo

I have recently completed a two-day training course on this software. Whilst I'm not certain about the possibility of using it (as I do not use a PC), I think the sessions got me thinking about my data and what I could consider.

It's really a very good tool for organising and linking all of your documents and links etc. together. What I did manually with a Word doc was very similar to what it can do for you. It has great tools for graphing/plotting your coding, but you really have to put in the same effort as if doing it manually. I think Leximancer is more powerful, but I lack a detailed understanding of Leximancer.

Saturday, March 9, 2013

My Case Study Protocol (draft)

Questions directing this phase of the research:
Why was a particular evaluation approach chosen and how successful was it?
How can we overcome barriers to successful evaluation praxis?
How does perception influence evaluative practice?

Overview
It has been shown that evaluation of locally funded learning and teaching projects is not reported in the literature (reference - lit review). The studies that are reported have been analysed and applied to the context of HE to find that xxxxx
Three cases will be studied to further explore these findings. Each case is an internally funded learning and teaching project, running for 12 months. These projects arose from successful grant applications, one from each of the currently running grant programs at one Australian university: the teaching innovation and scholarship grants; the competitive grants; and the priority grants [is this true?]. Project 1 comes from the Department of Education and is titled 'MOOCS'; Project 2 is a joint research study between the Faculty of Arts and the Faculty of Human Sciences, investigating feedback methods; Project 3 is from the Office of Social Inclusion (?) and is investigating the embedded mentor program.

Why did we select these? They were the only ones that volunteered... We wanted to follow along with them as they progressed and find out what they were doing in terms of evaluation.

Procedures
General plan:
  • Receive Ethics clearance.
  • Meet with the project team (if there is one). Show the list of questions which will be used as part of the data gathering instrument. Answer any of their questions about the study.
  • First 'interview' for follow up on answers plus some background to project. Take information and consent forms. Record interview.
  • Attend all of their project meetings (where possible) or presentations or focus groups etc. and take notes which I will use in my reflections. Act as participant-as-observer.
  • Meet two more times for 'interview', once after the progress report is due and again at the end of the project - perhaps after the final report is submitted. Each time, there will be a set of questions to be answered. Obtain further clearance from Ethics for these questions.
  • Review other documentation including minutes of meetings, reports and application etc.

Questions
Were there any identified barriers, and what did they do to overcome these? (project leaders)
What approaches did they choose and why? (project manager)
What was written about evaluation in the application (documents), and how does this compare to what actually happened? (self-reflections and interview data)
How did their individual experience with evaluation affect the way evaluation was conducted? (self-reflection)
How were the projects similar and different in respect to their chosen approach, their identified barriers and their methods for overcoming barriers?
What does the literature say about such barriers? (refer to lit review)
How important is the evaluation component in the selection of successful grant applications? (grant selection committee)

Report

At this stage I'm hoping to publish the findings from this case study in a journal or conference publication. Although Yin suggests it is difficult to get this done, I will nonetheless give it a go. I'm thinking about the corroboration of evidence and also the investigation of alternative propositions. If I could present something at a seminar or similar and get people to share like experiences/findings, this would be a great way to validate the findings of the study.

So who is the audience for this study? Who would benefit from the findings? I would say it is future project / grant holders. A list of guidelines to help with L&T project evaluation would certainly be helpful, aimed at how to overcome barriers etc. In addition, if we want to deal with the perception angle - then publishing findings in this area is one way to gather input.

Thursday, March 7, 2013

Collecting Case Study Evidence

There are six major sources of evidence discussed in this chapter of Yin (2009). 

Documentation. Such as letters, emails, diaries, agendas, meeting minutes, proposals, progress reports, formal studies or evaluations of the same case, and news clippings or other relevant media articles. Documents cannot be accepted as literal recordings of events; their value lies in corroborating and augmenting evidence from other sources. Inferences can be made but must be backed up by observed data, remembering that documents are written for a specific audience and purpose.

Archival records.  Such as census or statistical data, service records, budgets or personnel records, maps and charts, survey data about a particular group. Again these would have been produced for a specific purpose and audience so this must be taken into account when determining the usefulness of such data.

Interviews. These should take the form of guided conversations rather than structured queries. Yin stresses the importance of not asking interviewees the 'why' questions of your study directly, but asking 'how', which has a much softer, less accusatory tone. There are different types of interview: the in-depth and the focused interview. The former asks participants about the facts of the matter as well as their opinion of events; this obviously needs to be corroborated by other sources, and it is also wise to search for contrary evidence to strengthen the findings. In the focused interview, you follow a certain set of questions derived from the case study protocol. Care must be taken not to ask leading questions. A third type of interview consists of more structured questions along the lines of a survey. For all types of interview, though, care must be taken because responses are subject to problems of bias, poor recall and poor or inaccurate articulation. As such, all interview data should be corroborated with other sources.

Direct Observation. This provides opportunities for gathering additional data about the topic under study. Multiple observers can increase the reliability of this method, though this is not always a practical option.

Participant-observation. Similar to the previous method, but with the advantage of a more in-depth understanding of the case gained by taking on a role within the scenario. There are, however, problems associated with this method, related to the potential biases of getting too involved. There is also the situation where the time required to participate leaves little time to observe.

Physical artifacts. These can be collected or observed and could include such items as a tool, a work of art, or a printout of computer statistics such as usage data. They have less potential relevance in most typical case studies but can be used as supplementary sources of evidence.

During collection of data from any of these six methods, there are three principles which can maximise the benefits of the data collection phase by dealing with the problem of construct validity and reliability of the case study evidence.



1. Use multiple sources of evidence. This has the advantage of using converging lines of enquiry, a process of triangulation and corroboration. Construct validity is addressed since 'multiple sources of evidence provide multiple measures of the same phenomenon' (p117).



2. Create a case study database. This concerns how we organise and document the data collected and if done well (systematically) can increase the reliability of the case study. The aim is to allow independent investigation of the data should anyone want to query the findings or final case study report. There are four possible components to a database. 

  • case study notes - these may come from interviews, observations, document analysis etc but they must be organised in such a way that they can be easily accessed later on by external parties (and yourself).
  • case study documents - this can be unwieldy, but an annotated bibliography of such documents can be useful. Another method is to cross-reference documents in interview notes (say). As previously, these must be organised for easy access.
  • tabular materials - this could be a collection of quantitative results such as survey, observational counts or archival data.
  • narratives - this is the case study investigator's open-ended answers to the case study protocol's questions. Each answer represents an attempt to integrate the available evidence and to converge upon the facts of the matter or a tentative interpretation.


3. Maintain a chain of evidence. This will allow an external observer to follow the derivation of any evidence from the initial research questions to ultimate case study conclusions and thus increase the reliability of the case study. The chain runs from questions to protocol to evidence to database to final report, each linked forward and backwards (see p123 for a diagram).
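As a minimal sketch of how the four database components might be kept organised and queryable by an external party (the field names here are my own invention, not Yin's), something like the following could work:

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """A single case study note (from an interview, observation, document analysis...)."""
    case: str                                       # e.g. "Project 1"
    source: str                                     # "interview", "observation", ...
    text: str
    doc_refs: list = field(default_factory=list)    # cross-references to document ids

@dataclass
class CaseDatabase:
    """The four possible components of a case study database."""
    notes: list = field(default_factory=list)
    documents: dict = field(default_factory=dict)   # doc id -> annotation
    tables: dict = field(default_factory=dict)      # label -> tabular material
    narratives: dict = field(default_factory=dict)  # protocol question -> answer

    def add_note(self, note: Note) -> None:
        self.notes.append(note)

    def notes_for(self, case: str) -> list:
        """All notes for one case, so the evidence can be inspected independently."""
        return [n for n in self.notes if n.case == case]

# Hypothetical usage
db = CaseDatabase()
db.documents["grant-application"] = "Original grant application, annotated"
db.add_note(Note(case="Project 1", source="interview",
                 text="First interview with project manager",
                 doc_refs=["grant-application"]))
```

The `doc_refs` field is one way of realising the cross-referencing and chain-of-evidence ideas above: each note points back to the documents it draws on.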

Saturday, March 2, 2013

Case study data collection

Yin (2009) identifies a number of skills required for a good case study researcher. One must be:
  • a good listener
  • able to ask good questions -
  • adaptive and flexible
  • have a firm grasp of the issues being studied, and 
  • unbiased by preconceived notions
A good way to approach this last point is to report preliminary findings (to a conference or to colleagues) and look for alternative explanations and suggestions. If these can be refuted, then the likelihood of bias will have been reduced.

In the case where a team is to conduct the case study research, Yin recommends training for the group, as this can uncover a number of problems: flaws in the case study design or research questions; incompatibilities within the investigating team; and any impractical deadlines or expectations of resources. Positive features may also be uncovered at a training session, such as team members' complementary skills.

Case study protocol. As mentioned before, this is a major way of increasing the reliability of case study research. Recommended sections of this protocol include: an overview of the project; field procedures; case study questions; and a guide for the case study report.

The overview includes background information such as the context and perspective. It also addresses the substantive issues of the study, which would include the rationale for selecting the cases, any propositions or hypotheses being studied, and the broader theoretical or policy relevance of the inquiry. Finally, the overview would include any relevant readings relating to the issues.

Field procedures refers to being prepared for the unexpected. Since the case study depends heavily on the interviewee, researchers must be flexible and responsive, and must bring all necessary equipment to the field location. Preparation also includes receiving ethics clearance where appropriate and providing participants with information sheets and consent forms for signing.

Case study questions. The questions in a case study protocol are different from those in a survey because they reflect the actual line of inquiry, not a respondent's point of view. The protocol's questions are specific reminders of what information needs to be collected, and why; their purpose is to keep the researcher on track. There are different levels of questions, ranging from the interview questions, to the questions to be asked of the case, to the questions to be asked of the pattern of findings across multiple cases, plus questions to be asked of the literature or other published data related to the study and finally, possibly, normative questions about policy documentation and conclusions. Yin uses a nice description for distinguishing particularly between the first two levels of questions, in that 'The verbal line of inquiry is different from the mental line of enquiry' (p. 87).
Great care must be taken not to confuse the data collection source with the unit of analysis.

In my study, I am interested in an individual as the unit of analysis. However, information should also be sourced (and questions asked) from the organisation - from reports, other employees, managers, etc. I'm now thinking it may be a good idea to interview the project leaders, not just the project managers, i.e. Matt, Mitch and Justin, and so I need to come up with a list of questions to gather data from these sources. Also, I'm not crystal clear about my unit of analysis. I'm thinking it is the individual project, but maybe I should be looking at the person, as they are the ones exhibiting the behaviour, attitudes and perceptions about evaluation....

The final element is the guide for the case study report. One must think ahead to the audience for the report (as in a good evaluation) before data collection begins, but also to the format and outline of what will be included. Unlike other research methods, where the sequence of events is linear, a case study approach requires this out-of-step method so that all possibilities for data collection are considered and reconsidered throughout the study, allowing for redesign as we go along. The other consideration for the report is how to include the large amount of documentary evidence that may form part of the data collection phase. This can be included as an annotated bibliography, thus directing readers to locations if they require further evidence. One final note from Yin here is that other research methods are often dictated by journal requirements, but as case study research is less likely to make it to a journal publication, the researcher is able to be more flexible in their approach to the method.

Yin completes this chapter with a short discussion on how to screen the candidates for a case study. Reasons for selection should be included in the protocol. Pilot case studies are also recommended whereby the pilot test offers the opportunity for formative feedback on the research questions and even some conceptual clarification for the research design.

Wednesday, February 20, 2013

Conferences 2013

{requirements - needs to have evaluation as one of the themes and must be in Australia (preferably Sydney)}

Possible contenders:

Australasian Higher Education Evaluation Forum (AHEEF) 2013 - host is Uni TAS, but no information has been advertised as yet.
- has refereed papers - 2.5 days.

Australasian Evaluation Society - Brisbane, 2-6 September (Michael Scriven is keynoting)
- only requires a presentation proposal (due 21 March), 450 words. Presenters are encouraged to submit a paper to the Evaluation Journal of Australasia (refereed).

To Do list for March 2013

Phase 2
  1. Write up my case study protocol
  2. Organise catch-ups with the three projects - preferably attending their project meetings
  3. Send transcripts to PMs for their feedback
  4. Write new questions to ask the project leaders and get ethics clearance.
Phase 1
Revisit the article reporting on the results - change the style from a report to a story

Lit Review
  1. Finalise this paper and submit to journal 
  2. build chapter 2 of thesis with content deleted from that paper
Other
  1. Find a conference for this year to share phase 1 findings
  2. apply for funding - FoHS
  3. submit conference application
  4. submit an updated budget for the year - to include NVivo training

Sunday, February 17, 2013

case study design

Returning to Yin (2009) to review the details of Case Study research.

Many social scientists believe that case studies are only appropriate for the exploratory phase of an investigation; Yin describes how this is not the case. Three conditions indicate when a case study is suitable: i) the form of the research question - case studies answer the how and why questions; ii) the study does not require control of behavioural events; and iii) the study focuses on contemporary events.
The case study relies on two sources of evidence: direct observation of the events, and interviews of the persons involved in them.
Case studies, like experiments, are generalizable to theoretical propositions and not to populations or universes. Case studies can offer important evidence to complement experiments, since experiments are limited in their ability to explain how or why a treatment worked.
Yin uses two features of a definition, one is about the scope and the other is based on the data collection and data analysis strategies.
He also discusses the critiques of case study research. He insists that case studies can and do use a mix of qualitative and quantitative evidence. They also have a distinctive place in evaluation research.

There are 5 components to a good research design:
1. the research questions - how or why is most common in case study research
2. the propositions (if relevant) - an exploratory study may not have these
3. the unit(s) of analysis - previous literature is often a guide for defining this
4. the logic linking the data to the propositions - eg pattern matching, logic models etc
5. the criteria for interpreting the findings - this is not usually done via statistical methods but often uses rival explanations to clarify current findings; these need to be anticipated, though, to ensure the correct data are collected

Theory development is the ultimate aim of the design process. This may be in the form of developing or testing a theory. So 'the complete research design embodies a "theory" of what is being studied'. p.36

Now the question is, what is MY theoretical framework or philosophy underpinning my study? Or at least in phase 2 where I am using a case study approach.

Looking back to my proposal, I have stated the use of emergent realism, or pragmatism, as the theoretical paradigm for my study, since emergent realists do not insist that theirs is the only form of evaluative enquiry; on the contrary, this paradigm encourages other forms and approaches to evaluation (Mark, Henry, & Julnes, 1998). Proponents of a pragmatic approach to mixed-methods designs state the importance of practicality, contextual responsiveness and consequentiality as factors for success (Datta, 1997, p. 44). Since the aim of this study is to investigate the most efficient evaluation strategies, as well as barriers to their success, the theoretical framework of pragmatism provides the benefit of enabling a consideration of what has previously worked (and what has not).

With any research design, we need to judge its quality; there are four common criteria for doing this. Yin explains how each criterion can be applied to the case study method.
  • Construct validity - use multiple sources of evidence, establish a chain of evidence, or have key informants review the case study draft. The first two tactics are applied at the data collection phase; the third at the composition phase.
  • Internal validity - pattern matching, explanation building, address rival explanations, or use logic models. These are all done at the data analysis phase.
  • External validity - use theory (single case studies) or replication (multiple case studies). This is done at the research design phase.
  • Reliability - use case study protocol or develop a case study database. These are done at the data collection phase.
Internal validity is mainly a concern for explanatory case studies, where it becomes difficult to make inferences. With external validity, care must be taken not to fall prey to the criticism that case studies cannot be generalised more widely; this criticism misses the mark because case studies make analytic generalisations, not statistical ones. In terms of reliability, we must be sure to document procedures carefully.

The next step for me is to re-examine the different types of design (single vs. multiple and holistic vs. embedded). I had gone with the most complex of the matrix of 4, the multiple embedded with multiple units of analysis. Let's now break this down and explain what it means. I'll then define each of these elements for my study.

There are five main rationales for using a single case study: a critical case, an extreme or unique case, a representative or typical case, a revelatory case, or a longitudinal case, i.e. looking at how things change over time. Within the single case option, the case can be either holistic or embedded. A common problem with the holistic design, however, is that it may lack sufficiently clear measures or data. In the embedded design, subunits of analysis are developed to focus the case study. These can stop the case slipping from its original intent.

Multiple case designs consist of more than one case within a study. This is the design I have chosen for stage 2. The evidence from multiple cases is considered more compelling, leading to a more robust study. A mistake often made is to treat multiple cases as you would multiple respondents in a survey (which is sampling logic). Instead, each case should be 'replicated' in its design, such that each predicts similar results (literal replication) or contrasting results for anticipated reasons (theoretical replication). So case studies are not the place for testing the presence of phenomena.

Again we turn to the option of holistic or embedded. When an embedded design is used, each case may include the collection and analysis of quantitative data, such as a survey. The choice between these two designs rests upon the type of phenomenon being studied and the research questions.

Looking back to my proposal, I stated the following research questions for phase 2:
  1. What is understood by evaluation?
  2. What can be done to overcome barriers to successful project evaluation praxis?
  3. How successful was the modified approach to evaluation?
  4. How can an interactive evaluation instrument benefit the process of evaluation of learning and teaching projects (if at all)?
I now need to rethink these in light of the fact that case studies are designed to answer the how and why questions. I think I have begun to answer the first question in my lit review and phase 1. Having identified the barriers in phase 1, I now want to test them in this phase. So perhaps I need to rethink question 2: how can we overcome.... Question 3 should perhaps ask about their approach, as each of the three cases is using a different evaluation approach: so why did you choose that approach, and how successful was it? At this stage I think Q4 could be redundant. If the need for such an instrument emerges from the studies, then that can become the question in the next phase.





Wednesday, January 30, 2013

writing and writing and writing

Peer review is a funny old thing. My experience of publishing to date has not changed. Writing about something, anything, is very subjective. You do it in a way that makes sense to yourself. But will it make sense to others? People have their own preferred style, and if yours happens to align with your reviewers', then a much smoother journey towards publication will be had.

I'm currently 'resting' my lit review and critique of the project evaluation literature. I've spent too many hours crafting and reading and rewriting. I would love to get it published (wouldn't anyone) but am considering just including it as part of the thesis only.

We had a discussion about this this week, i.e. whether to continue with Thesis by Publication or to revert to a plain old thesis. My thoughts are that publishing matters. If I take time to research and find out information (and write it down), then I want to share it while it is current. As I'm planning to take a long while to complete my PhD, in six or more years' time a lot of this information will be superseded. So I think for now I am sticking with it this way (by publication) and will continue to write and be reviewed.


Sunday, January 27, 2013

Another lit review

Parylo, O. (2012). Evaluation of educational administration: A decade review of research (2001–2010). Studies in Educational Evaluation, 38(3–4), 73–83. doi:10.1016/j.stueduc.2012.06.002

This long article (>10,000 words), which reviews the literature and then concludes by calling for a further research study into the topic due to the lack of publications, is exactly what I was hoping to do.
There is an introduction, an overview of the topic (literature), and a methodology (which includes a list of data sources and the method used). The findings section includes a summary of 8 evaluation journals and the types/topics of articles included therein, followed by a summary of the relevant articles and then thematic trends. The discussion section is just 4 paragraphs, the implications just 2, and the significance and conclusion 2. The section on limitations makes some good points, and I feel this is an area missing from my paper. It mentions that unpublished data are overlooked, and that coding was done by only one researcher, hence researcher bias. In the implications section, a call is made for further research into the foci of educational evaluation and their purposes. Evaluation due to grant requirements is mentioned, and a call is made 'to better understand how program evaluation is being used in education and what should be done to improve its effectiveness' (p. 81).
I particularly like how the method is referenced (p. 76):
The type and purpose of program evaluation articles were determined according to the classification of Stufflebeam (2001). Overall, this analysis used common strategies of qualitative content analysis (as summarized by Romanowski, 2009): (1) careful examination of the textual data (i.e., published articles); (2) data reduction (i.e., selecting those articles that would help to answer research questions); (3) organizing condensed data (i.e., organizing the articles in groups); and (4) revisiting the data to confirm the findings.
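Those four steps read almost like a pipeline. As a purely illustrative analogue - keyword matching stands in for interpretive coding, and the article snippets are invented - the flow might be sketched as:

```python
# Crude computational analogue of the four content-analysis steps
# summarised from Romanowski (2009). Real qualitative coding is
# interpretive, not mechanical; this only mimics the reduction flow.
# All article texts below are invented placeholders.

articles = {
    "a1": "program evaluation used to satisfy grant requirements",
    "a2": "teacher pay negotiations in rural districts",
    "a3": "improving the effectiveness of program evaluation in education",
}

# (1) careful examination of the textual data: read every text
corpus = {k: v.lower() for k, v in articles.items()}

# (2) data reduction: keep only articles that address the research question
relevant = {k: v for k, v in corpus.items() if "evaluation" in v}

# (3) organising condensed data: group the articles by a simple theme
themes = {}
for k, text in relevant.items():
    theme = "use" if "used" in text else "improvement"
    themes.setdefault(theme, []).append(k)

# (4) revisiting the data to confirm the findings: every grouped
# article must still be in the relevant set
assert all(k in relevant for ks in themes.values() for k in ks)
```

In practice steps (2) and (3) are judgement calls by the researcher; the sketch just shows why doing them in this order keeps the dataset manageable.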