Saturday, April 18, 2015
Already I feel like I have gotten off to a better start to the year than last.
I had tried to run a focus group to get feedback on the development of the framework before moving on to constructing some kind of interactive version that I could trial. However, it proved impossible to get anyone along to it at MQ.
I checked in with ethics at UTS and, provided I did an amendment at MQ to include a UTS staff cohort, they were happy for me to go ahead and run the focus group at UTS (new employer).
Approval was also required from the DVC (Shirley). Once done, the invite went out and the focus group/workshop is now scheduled for Wednesday 22nd April. Ten participants have RSVP'd, each with a current L&T grant. More on that later.
Also, just in is feedback from IJEM that paper 3 (details of phase 1) has been accepted with minor revisions. So I am now looking back at data and analysis notes from 2012(!). It's dusty up there.... in here.... so I am scouring this blog in the hope of finding something useful, since the reviewers want to see more detail of the analysis in the results and appendices.
Saturday, October 11, 2014
analysis notes - phase 2
Starting a second cycle of coding on the first of three sets of data. Initial Coding, interspersed with some In Vivo Coding and Versus Coding, was used in the first cycle.
Codes were transposed into a spreadsheet and then colour-coded using a Focused Coding (Charmaz, 2006) approach.
The categories thus far are:
- People (those connected with a project, such as the steering group, audience etc.)
- changing nature of projects; contextual factors
- project management information
- issues or challenges
- time taken or timing
- types of evaluation or evaluand
- perceptions, affective language, emotions, conceptions
- communications
- quality
At this point I will start with Focused Coding and perhaps simultaneously note whether any previously identified themes appear in the current data set.
Once I complete this with the first set of data I can separately work on the 2nd and 3rd data sets (projects/case studies). Then, using case study methods, I can compare and contrast findings from each case.
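As a rough sketch of how that spreadsheet step could be mechanised (not something I have actually done), the snippet below tallies first-cycle codes against focused categories like those above. The file name, column layout and category keyword lists are all hypothetical, not my actual coding scheme.

```python
# A minimal sketch (not my actual workflow): tally first-cycle codes exported to a
# spreadsheet against focused-coding categories. File name, columns and the
# category keyword lists are hypothetical.
import csv
from collections import Counter

CATEGORIES = {
    "people": ["steering group", "audience", "stakeholder"],
    "contextual factors": ["restructure", "change", "context"],
    "time taken or timing": ["time", "timing", "delay"],
    "issues or challenges": ["problem", "challenge", "issue"],
}

def categorise(code_text):
    """Return every focused category whose keywords appear in a first-cycle code."""
    hits = [cat for cat, words in CATEGORIES.items()
            if any(w in code_text.lower() for w in words)]
    return hits or ["uncategorised"]

counts = Counter()
with open("first_cycle_codes.csv", newline="") as f:  # columns: transcript, code
    for row in csv.DictReader(f):
        for cat in categorise(row["code"]):
            counts[cat] += 1

for cat, n in counts.most_common():
    print(f"{cat}: {n}")
```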
Thursday, April 24, 2014
Coding
Just finished reading a great book by Johnny Saldaña, The Coding Manual for Qualitative Researchers (2013, SAGE). I've learned a lot of things; here's a summary.
I should have started coding as I was collecting my data rather than waiting until the end of the data collection period, which ran over 18 months.
The value of analytic memos should not be underestimated. Another way of saying this is reflective comments, which I guess is what I did, but next time I would formalise these and use them as sources of evidence for coding.
There are so many ways to code! I've tried to identify methods that may be suitable for my study and these include structural, descriptive, In Vivo, values, provisional, hypothesis, landscaping, focused and elaborative coding.
I have begun first cycle coding using Initial Coding (Charmaz, 2006) as a starting point. I'll then look at the codes generated and decide which other method of coding may be more appropriate for the next pass at the data.
During the first cycle coding process I have added many observer comments to the transcripts which I will use as a further data source.
I've so far resisted using CAQDAS, or computer-assisted qualitative data analysis software, i.e. NVivo, mainly because it does not run on a Mac and I don't have a Windows computer. I have tried Leximancer to do a first pass on the data without much success. I may come back to this and use it as a triangulation method. This book has outlined other methods that are manual but make use of Excel and Word to manipulate and organise data. I'm going to manually write codes on the transcripts and then use these programs to arrange codes and data.
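As a rough illustration of that manual workflow, the sketch below pulls inline code tags out of a transcript and writes them to a flat CSV that Excel can then sort and filter. The inline tag convention ([CODE: ...]) and the file names are assumptions for the example, not my actual format.

```python
# A rough sketch of the Word/Excel-style manual workflow: extract inline code tags
# (assumed here to look like "[CODE: wording]") from a transcript into a CSV.
# The tag convention and file names are assumptions for illustration only.
import csv
import re

CODE_TAG = re.compile(r"\[CODE:\s*(?P<code>[^\]]+)\]")

rows = []
with open("interview_01.txt", encoding="utf-8") as f:
    for line_no, line in enumerate(f, start=1):
        for match in CODE_TAG.finditer(line):
            rows.append({
                "transcript": "interview_01",
                "line": line_no,
                "code": match.group("code").strip(),
                "extract": line.strip(),
            })

with open("codes_for_excel.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["transcript", "line", "code", "extract"])
    writer.writeheader()
    writer.writerows(rows)
```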
I need to document the process I end up using and include it in the dissertation as a chapter. Only one or two paragraphs would make it into a journal paper.
When I look at artifacts such as reports and applications, I should critically reflect on them, as they "reflect the interests and perspectives of their authors" (Hammersley & Atkinson, 2007, as cited in Saldaña, 2013, p.54). They can also carry "values" (Hitchcock & Hughes, 1995, as cited in Saldaña, 2013, p.54).
other notes:
Simultaneous Coding: "social interaction does not occur in neat, isolated units" (Glesne, 2011, p.192).
Structural Coding: useful for hypothesis testing (amongst other applications) - could be a phase 2 aim.
Descriptive Coding: summarise the topic of a paragraph of data in a short sentence or a few words. A basic method.
In Vivo Coding: applicable to action and practitioner research (Coghlan & Brannick, 2010; Fox, Martin & Green, 2007; Stringer, 1999).
Process Coding: useful for studies searching for "ongoing action/interaction/emotion taken in response to situations or problems, often with the purpose of reaching a goal or handling a problem" (Corbin & Strauss, 2008, pp.96-97).
Values Coding: reflects a participant's values, attitudes and beliefs, representing her or his perspectives or world-view. One application is to explore experiences and actions in case studies. Must remember though that "values coding is values laden" (p.114).
Provisional Coding: builds on or corroborates previous research and investigations.
Causation Coding: Attribution refers to reasons or causal explanations. An attribution answers the 'why?' question. We should carefully consider the nuanced differences between a cause, a reason and a motive, and keep our focus primarily on people's intentions, choices, objectives, values, perspectives, expectations, needs, desires and agency within their particular contexts and circumstances (Morrison, 2009).
Finally, "coding is not a precise science; it is primarily an interpretive act" (p.193).
From codes to themes -
Saturday, February 15, 2014
Analyzing Case Study Data - Yin
The importance of having a clear analytical strategy is highlighted in this chapter of Yin (2009). Analysis is the least developed and most difficult aspect of conducting a case study, and 'much relies on the investigator's own style of rigorous empirical thinking, along with the sufficient presentation of evidence and careful consideration of alternative interpretations' (p.127).
There is a short summary of the use of computer-assisted tools, but with a reminder that they are only tools and only assistants; how the investigator manipulates the data is more important. It is recommended to first 'play' with the data, and some ways of doing this are summarised: creating arrays; making a matrix of categories; using flowcharts and graphics; tabulating the frequency of events; statistically analysing events; organising data chronologically.
All data has a story to tell, and the analytical strategy will help you craft that story. Yin describes four general strategies.
Relying on theoretical propositions. These are the propositions that led to your case study, shaped the research questions and the literature review, and therefore led to new hypotheses or propositions.
Developing a case description. Whilst less preferable than the previous strategy, a descriptive framework in which to organise the data can sometimes be useful, particularly when the data has been collected without any theoretical propositions being made. Such a framework should have been developed before designing the data collection instruments. A descriptive approach may help to identify causal links which can then be analysed.
Using both qualitative and quantitative data. There are two reasons why quantitative data may be relevant to your study. a) the data may cover the behaviour or events that your study is trying to explain. b) it may be related to an embedded unit of analysis within the broader study. Using statistical techniques to analyse this quantitative data at the same time as analysing the qualitative data will strengthen your study.
Examining rival explanations. This can be done within all of the previous three strategies and will strengthen the analysis if collection of evidence regarding 'other influences' is carried out. These rivals can be categorised as craft rivals (including null hypothesis, researcher bias and threats to validity) or real-life rivals. The more rivals a study addresses and rejects, the more confidence can be placed on findings in a case study.
The next section outlines five analytic techniques. Each one needs to be practiced and the case study researcher will develop their own repertoire over time to produce compelling case studies.
Pattern Matching. This can strengthen a study's internal validity by comparing empirically based patterns with predicted ones. The following pattern types can be used:
- nonequivalent dependent variables as a pattern
- rival explanations as patterns
- simpler patterns
No matter the pattern type chosen, the more precise measures you can obtain, the stronger your argument/case study will be.
Explanation building. Here the data is analysed to explain the case. A parallel method for exploratory case studies can be used if the aim is to develop ideas for further study rather than to draw conclusions. To explain something we stipulate a presumed set of causal links as to why or how it happened. Due to the imprecise nature of narratives, 'the better case studies are ones which have reflected some theoretically significant propositions' (p.141). The eventual explanation is likely to be the result of a series of iterations whereby the evidence is examined, theoretical propositions revised, and the evidence examined again from a new perspective. As in pattern matching, the aim is to show how rival explanations cannot be supported. The author notes however that this approach is 'fraught with dangers' and refers back to earlier advice to strengthen the study, such as following a case study protocol, creating a case study database and maintaining a chain of evidence. If constant reference is made to the original purpose of the inquiry and to the possible alternative explanations, the researcher is less likely to stray along a divergent path during the iterative cycles.
Time-Series Analysis. If there is just one dependent or independent variable we call it a simple time series. A mixed pattern across time can give rise to a complex time series and tracing events over time is known as a chronology. Care must be taken with this type of analysis not to simply describe or observe trends over time (this would be known as a chronicle) but rather to look for causal inferences.
Logic Models. This technique involves matching empirically observed events to theoretically predicted events and is differentiated from pattern matching by its sequential stages. There are four types, related to the unit of analysis chosen.
- Individual-level
- Firm or organisational-level (linear sequence)
- Firm or organisational-level (dynamic, multi-directional sequence)
- program-level
In each type, you should define your logic model and then 'test' the model by analysing how well your data supports it (p.156).
Cross-Case Synthesis. This technique treats each case as a separate study. One example is the use of word tables to display the data for each case according to some framework, from which cross-case conclusions can be drawn. However, care must be taken in developing 'strong argumentative interpretation' (p.160).
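To make the word-table idea concrete, here is a minimal sketch of a case-by-theme matrix. The cases, themes and cell summaries are placeholders, not findings from my three projects.

```python
# A minimal sketch of a cross-case "word table": one row per theme, one column per
# case, each cell holding a short summary. All entries are placeholders.
themes = ["stakeholder involvement", "timing of evaluation", "use of findings"]
cases = {
    "Case A": {"stakeholder involvement": "steering group met twice",
               "timing of evaluation": "left to the final month",
               "use of findings": "fed into final report only"},
    "Case B": {"stakeholder involvement": "students surveyed mid-project",
               "timing of evaluation": "built into each milestone",
               "use of findings": "informed redesign of workshops"},
    "Case C": {"stakeholder involvement": "none beyond the project team",
               "timing of evaluation": "not undertaken",
               "use of findings": "n/a"},
}

col = 34
print("theme".ljust(26) + "".join(name.ljust(col) for name in cases))
for theme in themes:
    print(theme.ljust(26) + "".join(cases[name].get(theme, "").ljust(col) for name in cases))
```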
Thursday, October 24, 2013
case study method - draft
What data will be collected and how?
Invitations to participate in this study were sent to the awardees of all Learning and Teaching grants in 2012. Three respondents agreed to become case studies. The projects were from the School of Education; the Faculties of Human Sciences and Arts; and the Office for First Year Experience (FYE). Each project had a project leader and a project manager. Both were briefed on the case study protocol (see Appendix A). This included a minimum of three interviews with the project manager over the duration of the project (beginning, middle and end) and an interview with the project leader. These interviews were recorded for transcription, and the transcripts were then sent to the participants for review. Access to documentation, including grant applications, minutes of any project meetings and final reports, was also part of the case study protocol. There was also a request to attend any project meetings as a participant observer; this was only possible in one project.
How will the data be analysed?
Data generated in case study research can become unwieldy...(because?). The amount collected for this study was kept to a minimum by following just three projects. Each project was analysed separately and the results compared. Content analysis of the interview and project documentation data was carried out using the manual extraction of themes (Krippendorff, date).
Limitations of the methodology:
Generalising results from just three cases can be problematic. However, the results could then be used in a pilot or wider study.
Saturday, May 18, 2013
Two new directions
I was recently introduced to the appraisal framework (Martin & White, 2005), which would provide a nice theoretical grounding for further work on the conceptual conflation of research vs evaluation, as well as on perceptions of evaluation in general.
Website and corresponding references here: http://grammatics.com/appraisal/index.html and an overview here: http://grammatics.com/appraisal/AppraisalOutline/Framed/Frame.htm
and latest book: http://students.lti.cs.cmu.edu/11899/files/LanguageofEvaluationBook.pdf
Another thread I need to follow is a special issue of New Directions for Evaluation (2003):
The Practice-Theory Relationship in Evaluation, and in particular the editor's study: What Guides Evaluation? A Study of How Evaluation Practice Maps onto Evaluation Theory
and Henry and Mark's: Toward an Agenda for Research on Evaluation
Rounding off with Alkin's: Evaluation Theory and Practice: Insights and New Directions
Friday, May 17, 2013
Nvivo
Have recently completed a two-day training course on this software. Whilst I'm not certain about the possibility of using it (as I do not use a PC), I think the sessions got me thinking about my data and what I could consider.
It's really a very good tool for organising and linking all of your documents and links together. What I did manually with a Word doc was very similar to what it can do for you. It has great tools for graphing/plotting your coding, but you really have to put in the same effort as if doing it manually. I think Leximancer is more powerful - but I lack a detailed understanding of Leximancer.
Saturday, October 27, 2012
Writing
I'm currently writing up my lit review, which is taking longer than I expected. Initially I had planned to conduct a review of the literature as I progressed through the PhD. However, we have discussed writing this up as a paper defining why a study of the nature I proposed for my PhD is needed, i.e. a rationale, if you like.
One of my issues with this is that there will be more literature as I go, so this can't be one definitive document. Marina suggested I publish the table in which I'm synthesising findings from the papers on a public website so it can be constantly updated. I like this idea.
On a writing retreat this week I showed my writing group the efforts so far and I have to say I was a bit disappointed with the feedback at first. The general opinion was that I shouldn't try to publish this lit review, as most journals would not be interested in it; after all, who am I to tell the world..? They suggested I keep writing it up, but that it become a chapter in the thesis rather than a journal article. That's actually fine and I'll discuss this with my supervisors next week.
So where I am at right now is trying to weave the themes that have come from my searching, with the relevant items from the articles and then craft a suitable reflection/discussion and conclusion.
Themes I'm working with are:
- Formative vs. summative evaluation
- Building capacity in evaluation skills
- Non-completion of project elements (such as evaluation, reports and the project itself)
- Non-usage of evaluation findings
- Importance of stakeholder involvement
- Inaccurate initial expectations (could be linked to #2)
- Importance of planning and defining clear evaluation criteria
- Evaluation approaches, models or frameworks
Sunday, September 30, 2012
Case Study
Studying cases allows for obtaining an in-depth understanding (through explaining, exploring, and describing) of complex social phenomena, while retaining the holistic and meaningful characteristics of real-life events (Yin 1994).
Kohlbacher, F. (2006, January). The Use of Qualitative Content Analysis in Case Study Research. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 7. Available at: <http://www.qualitative-research.net/index.php/fqs/article/view/75/153>. Date accessed: 01 Oct. 2012.
Abstract: This paper aims at exploring and discussing the possibilities of applying qualitative content analysis as a (text) interpretation method in case study research. First, case study research as a research strategy within qualitative social research is briefly presented. Then, a basic introduction to (qualitative) content analysis as an interpretation method for qualitative interviews and other data material is given. Finally the use of qualitative content analysis for developing case studies is examined and evaluated. The author argues in favor of both case study research as a research strategy and qualitative content analysis as a method of examination of data material and seeks to encourage the integration of qualitative content analysis into the data analysis in case study research.
Saturday, July 21, 2012
Analysis of Phase One
The research questions for this phase are:
- What evaluation forms and approaches have been used in Macquarie-funded learning and teaching projects? [Easy to answer from the interview data.]
- What are the issues and challenges in evaluating learning and teaching projects? [This was the original Q11 in the interviews. Briefly, items would include:
  - lack of skills - guidelines and support/resources needed
  - initial plans too ambitious
  - insufficient use of stakeholders
  - insufficient money/budget - to pay for extra help or input when needed, i.e. admin support
  - no feedback at any stage in the project
  - lack of time: to plan, for reflection/learning, too busy with teaching and other demands]
- What is understood by evaluation? [Can look at the misuse or confusion in terminology. For example, Evaluation vs. Research and Evaluation vs. Project, in terms of both planning and results. There is also some misinterpretation between evaluation and feedback. Also look at Q12 from the interviews.]
- How does perception influence how evaluation is carried out in practice? Or, what influences how evaluation is carried out in practice? [For this I think I need to go through the data and pull out all examples where perception is discussed, albeit implicitly most of the time. I also need to look for other papers that have done this and see how they have done it. Need to refer back to the theoretical approach of the project as well as the realism paradigm.]
Saturday, July 14, 2012
Thematic Analysis
Braun and Clarke (2006) offer a complete and in-depth paper on what it is, guidelines for using it and pitfalls to avoid. They state that it can be considered a method in its own right - contrary to other authors stating it is not. It is compatible with constructionist paradigms, and they stress the flexible nature of its use, which can sometimes cause it to be framed by a realist/experimental method (though they don't particularly go along with this).
Importance is placed on explaining 'how' the analysis is conducted (as it's often omitted), as well as describing what they are doing and why they are doing it that way. Terminology is defined - data corpus, set, item and extract [all interviews; answers or themes; one interview; quotes].
"Thematic analysis is a method for identifying, analysing and reporting patterns (themes) within data." (p.79)
We shouldn't say that themes 'emerge', because that implies they reside in the data; in actual fact they reside in the head of the researcher, who plays an active role in identifying, selecting and reporting them (Ely et al., 1997; Taylor & Ussher, 2001).
How do we say a theme exists? There are no hard and fast rules - though prevalence is important. A theme could be prevalent in one data item or across the whole corpus. And it may (or may not) be present in every data set or it may be present to only a small extent.
You should think about how a theme captures something of importance to the overall research question(s). This may make it 'key'. The question then lies in how to report it, i.e. 'most participants', 'some...', 'a number of...' etc.
You need to ask yourself whether you want to provide a rich thematic description of the whole data set, or a detailed account of just one aspect of the data.
Next, will you provide an inductive analysis, whereby the themes are strongly linked to the data and the research questions can evolve through the coding, or a theoretical analysis, whereby the coding is driven by your specific research questions or theoretical interest?
Semantic vs latent themes, i.e. surface-level description or deeper analysis of the underlying causes, assumptions and conceptualisations - the latter leads towards a more constructivist approach.
Back to the paradigm wars - if a constructivist paradigm is used, then the themes are built from the sociocultural contexts which enable individual accounts. In comparison, the realist framework allows a simpler explanation to develop, since meaning, experience and language are unidirectional, i.e. the language is used to describe experience and provide meaning.
The paper then goes on to describe 6 steps in thematic analysis - to be used flexibly and as a guide.
1. Familiarize yourself with your data: jot down notes for coding schemas that you will come back to in subsequent phases.
2. Generate initial codes. Work systematically through the data set and identify interesting aspects in the data items that may form the basis of repeated patterns.
3. Search for themes: sort the potential codes into themes - a broader analysis of the whole data set. Use concept mapping.
4. Review themes. This is a two-step process: level 1 consists of looking at the extracts for each theme and deciding if they really fit that theme and are coherent; if not, reassess the theme and perhaps discard extracts if necessary. Level 2 of this stage will depend on your theoretical approach and requires revisiting the whole data set and considering the validity of each of the themes.
5. Define and name the themes. Look at the data extracts for each theme and organise into a coherent account with an accompanying narrative. Look for sub-themes within a theme and finally, use clear (short) names for the themes so that the reader understands quickly what it means.
6. Produce the report. More than just describe the data, tell the story by using the extracts to make an argument in relation to the research questions.
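As a toy illustration of steps 2 and 3 (and of counting prevalence for reporting), the sketch below groups extracts that already carry initial codes into candidate themes. The codes, themes and extracts are invented for the example, not drawn from my data.

```python
# A toy sketch of steps 2-3: sort initially coded extracts into candidate themes and
# count how many data items each theme appears in (prevalence). All values invented.
from collections import defaultdict

coded_extracts = [
    {"item": "interview_01", "code": "no time to evaluate"},
    {"item": "interview_02", "code": "evaluation confused with research"},
    {"item": "interview_02", "code": "ran out of time"},
    {"item": "interview_03", "code": "wanted evaluation templates"},
]

candidate_themes = {
    "time pressure": {"no time to evaluate", "ran out of time"},
    "conceptual confusion": {"evaluation confused with research"},
    "need for support": {"wanted evaluation templates"},
}

items_per_theme = defaultdict(set)
for extract in coded_extracts:
    for theme, codes in candidate_themes.items():
        if extract["code"] in codes:
            items_per_theme[theme].add(extract["item"])

total_items = len({e["item"] for e in coded_extracts})
for theme, items in items_per_theme.items():
    print(f"{theme}: present in {len(items)} of {total_items} data items")
```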
Some questions to ask towards the end of the analysis:
What does this theme mean?
What are the assumptions underpinning it?
What are the implications of this theme?
What conditions are likely to have given rise to it?
Why do people talk about this thing in this particular way?
What is the overall story the different themes reveal about the topic?
Potential pitfalls are described:
1. No real analysis is done and the 'analysis' is just a string of extracts with little analytic narrative.
2. Using the data collection questions as the themes.
3. No coherence around an idea or concept across all aspects of the theme.
4. No consideration of alternative explanations or variations in the data.
5. A mismatch between the interpretation of the data and the theoretical framework.
Saturday, July 7, 2012
Bazeley 2009
This article is about the analysis of qualitative data and specifically about the use of themes.
It gives a short introduction on the difference between categories, concepts and themes (often used interchangeably). This author uses category for the descriptive and concept for the more abstract; other authors use concept as the lowest level and category for a group of concepts (Strauss & Corbin, 1998).
The author states that producing themes as a goal of research is not of much use or interest. Just describing them and using some quotes from the literature to support them is not enough to be convincing.
Suggestions are made to share some portions of data with a colleague to get a fresh perspective and alternative avenues to pursue. The author writes that if describing themes is what you are doing, then you need to connect them to the literature and contextualise them: 'Data must be challenged, extended, supported and linked to reveal their full value' (p.8).
Are emergent themes really emergent? If you asked questions which produced these themes, then the findings are shallow and unsubstantiated. The author talks about 'garden path analysis' (Lyn Richards) - just stating what you see - and suggests moving towards a Describe-Compare-Relate approach instead.
As a starting point, describe: how did people talk about this theme, how many talked about it, and what's not included? [NOTE: in my case I could possibly look at the sentiment lens in Leximancer.]
Then compare differences in characteristics and boundaries for that theme. [In my study I could compare different ability levels, academic levels, experience of project management, internal vs external projects, etc.]
Then relate to themes others have already written about.
A section on creating and using displays of the data extols their value for developing an understanding and presenting conclusions.
Matrix displays - for detecting patterns, for facilitating comparative analysis and sometimes for presenting conclusions.
Flow charts and models - for presenting conclusions.
Typologies - used as a working tool and can become a final presentation tool.
And finally - avoid reliance on quotes for evidence, as this encourages superficial reporting of themes. Try not to write to the sources, voices or methods. Build a coherent argument using evidence and then '...add illustrative quotes to add interest and clarity for the reader' (p.20).
Friday, July 6, 2012
Leximancer
Searching for articles which have used Leximancer to see how best to utilise the concept maps I am producing.
One study (Grimbeeck, Bartlett & Lote, 2004) analysed interview transcripts with a student and looked at the words/concepts produced in the student's responses, in the interviewer's questions, and then in both together to see where overlaps occurred. This got me thinking it may be useful to see if I can answer my question about how individuals' concepts of evaluation are influenced, and by what, by looking at some of the text which is not specifically answering questions but going off on tangents.
It's also true that I have noticed from the interviews a theme that the meaning of evaluation is often interpreted differently and confused with research - could this be a theme I could pull out using Leximancer?
A paper by Cretchley (2010) reported on a study of the dynamics in conversations between carers and patients. They used Leximancer to analyse the contributions, determine the concepts and determine their functions in the discourse. In their analysis they grouped together different conditions to look at patterns of behaviour. I could perhaps do this by grouping the different 'levels', i.e. novice vs experienced, or those that did evaluate and those that didn't. Or even external and internal projects, say.
When quoting/discussing the software, refer to Smith & Humphreys (2006): Smith, A. E., & Humphreys, M. S. (2006). Evaluation of unsupervised semantic mapping of natural language with Leximancer concept mapping. Behavior Research Methods, 38(2), 262-279.
On p.1616, Cretchley states: "Here, it enabled us to take an exploratory approach, letting the list of concepts emerge automatically from the text. Other qualitative content analysis techniques (e.g., NVivo) require the analyst to derive the list of codes and rules for attaching these to the data, and are thus researcher driven. As a result, these methods require checks of reliability and validity. In this research, we set up the Leximancer projects in a way that allowed the intergroup dynamics to be depicted with minimal manual intervention. This approach, which is strongly grounded in the text, permits a level of reliability that is an advantage over other methods." In Crofts & Bisman (2010) the use of Leximancer is defended as it avoids the researcher-imposed coding schemes that can be inherently biased. They quote Atkinson (1992): Atkinson, P. (1992), "The ethnography of a medical setting: reading, writing, and rhetoric", Qualitative Health Research, Vol. 2 No. 4, pp. 451-74.
It includes a good description of how Leximancer works (with citations), and of how they utilised the software (p.188): "For our purposes, Leximancer provided a means for generating and recognising themes, including themes which might otherwise have been missed or overlooked had we manually coded the data. Using the themes derived by the software, the researchers then went back to engaging directly with the data in order to further explore, and interpret, the meanings of the text."
David Rooney, Knowledge, economy, technology and society: The politics of discourse, Telematics and Informatics, Volume 22, Issue 4, November 2005, Pages 405-422
This article is very in-depth on knowledge discourse but uses Leximancer for thematic analysis. It has a table of the top 20 ranked concepts, then uses the concept maps to show how concepts are clumped into themes. It first uses the top 8% of concepts, as 'This is the lowest level at which a number of discernable semantic clusters has formed'.
Then it looks at the clusters that are relationally close and makes inferences about the 'distant' cluster. The second concept map it uses is set at 54% of concepts which then shows other themes and how they link to the main four identified themes.
Watson, Glenice and Jamieson-Proctor, Romina and Finger, Glenn (2005) Teachers talk about measuring ICT curriculum integration. Australian Educational Computing, 20 (2). pp. 27-34. ISSN 0816-9020
Leximancer is a software package for identifying the salient dimensions of discourse by analysing the frequency of use of terms, and the spatial proximity between those terms. The Leximancer package uses a grounded theory approach (Glaser & Strauss, 1967; Strauss & Corbin, 1990) to data analysis. It computes the frequency with which each term is used, after discarding text items of no research relevance (such as 'a' or 'the'), but does not include every available word in the final plotted list. Constraints include the number of words selected per block of text as well as the relative frequency with which terms are used.
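Leximancer itself is proprietary, but the underlying idea described in that quote (term frequency plus co-occurrence of terms within blocks of text) can be illustrated crudely. This is a toy sketch with made-up text blocks and a hand-picked stopword list, not a description of how Leximancer works internally.

```python
# A crude illustration of term frequency and within-block co-occurrence, the two
# ideas behind Leximancer concept maps. Stopword list and text blocks are made up.
from collections import Counter
from itertools import combinations

STOPWORDS = {"a", "the", "and", "of", "to", "in", "is", "it", "that", "was", "with"}

blocks = [
    "evaluation of the project was confused with research",
    "the project ran out of time and evaluation was dropped",
    "research outputs mattered more than evaluation findings",
]

term_counts = Counter()
pair_counts = Counter()
for block in blocks:
    terms = sorted({w for w in block.lower().split() if w not in STOPWORDS})
    term_counts.update(terms)
    pair_counts.update(combinations(terms, 2))  # co-occurrence within one block

print("most frequent terms:", term_counts.most_common(5))
print("most frequent co-occurring pairs:", pair_counts.most_common(5))
```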
Sunday, July 1, 2012
Q13 - other comments
This was an interesting final question. There were a number of themes that came from the interviewees' final responses.
A number of people talked about the need for support and resources on evaluation, both at the time of application and during the project. Suggested ideas included templates and guidelines, as well as information on different frameworks and their benefits.
A few people mentioned having time to look back at the project, revisit it and look at impact - but I guess that would depend on whether the project produced a 'product' that could be evaluated for impact.
A few people mentioned the importance of incorporating the evaluation into the research cycle. There was also mention of the importance of receiving constructive feedback from the university. This could be viewed as a need for identifying study audiences and stakeholders, which was another theme that came out regarding their importance if one wants some traction in implementing the outcomes of a project.
Another theme was the forms - make evaluation compulsory and give it importance by having a section on the application, but more importantly on the reporting pro-forma.
And finally the theme of networking was also evident. Participants mentioned the benefit of sharing findings with people who were interested, i.e. in L&T-related research etc.
Q12 - the value of doing evaluation
This question asked participants 'what value did evaluation add to your project?' Even though there were a number of projects that didn't conduct an evaluation, most people attempted to answer by saying hypothetically what they felt rather than what they observed.
The majority of responses mentioned the learning that takes place when one looks back and reflects. A couple of responses mentioned learning from mistakes and valuing that. Evaluation was also seen as a mechanism for keeping the project focussed.
One participant was concerned about the fine line between feeling like you are being checked on and being supported. Another participant agreed that there has to be some amount of accountability.
Question 11 - challenges to conducting the evaluation
I'm now looking at the question 'were there any challenges to conducting the evaluation?'
An emerging theme is that there is a lot of interference from contextual events. Running a project is something done at the same time as teaching, research and marking. Then there are the institutional requirements that occur simultaneously - the playing out of major projects and change procedures that also impact the time available and possibly the outcomes. These all affect how the project is run rather than the evaluation itself, but it appears that, as a consequence, evaluation gets sidelined or is often not as rigorous as participants had initially hoped.
Some other themes that emerged:
- Time
- Money
- Administration/resources
- Confusion in answering: talking about the challenges of conducting the research rather than the evaluation
- More money
- More time
- Better planning
- More help with evaluation
- Developmental Evaluation
This last dot point arose because a few people talked about using evaluation for checks and measures along the way, to refocus, and about having evaluation incorporated into the approach so it becomes part of what you do and not something extra you have to make time for.
Saturday, June 30, 2012
Question 10 - reporting strategies and feedback
The following reporting strategies were used:
- 15 final reports to the funding body
- 6 reports or products to an audience
- 5 interim reports
- 5 conference presentations
- 4 reports to stakeholders
- 2 journal articles
- 2 reports to departments (at meetings)
- 2 reports to faculty L&T
- 1 set of minutes of project meetings sent to stakeholders
- 1 report to a faculty executive
When asked how their reports were received, 10 out of the 15 participants reported receiving no feedback other than a thank-you. This provoked a range of reactions, from indifference to relief to annoyance. I think it was clear, though, that more than tokenistic feedback is required - even a follow-up from someone asking about the project and showing interest in how it has developed or been taken further.
One participant mentioned that it would be good if they could receive help with publishing in the L&T field rather than in their discipline field, which of course they can do easily. Considering a number of people mentioned conference presentations and journal articles, perhaps this is a service the LTC could follow up on?
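For what it's worth, a tally like the one above can be produced mechanically once each interview has been coded. A minimal sketch, assuming the coded reporting strategies are simply listed per participant; the data below is invented purely to show the mechanics, not the real interview data.

```python
# Hypothetical tally of coded reporting strategies per participant.
# The participant data is invented for illustration only; in the study the
# codes were kept in a spreadsheet and counted there.
from collections import Counter

coded_strategies = {
    "P01": ["final report to funding body", "conference presentation"],
    "P02": ["final report to funding body", "interim report", "journal article"],
    "P03": ["final report to funding body", "report to stakeholders"],
}

tally = Counter(code for codes in coded_strategies.values() for code in codes)
for strategy, n in tally.most_common():
    print(f"{n} {strategy}")
```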
Labels: analysis, feedback, publication, questions, Reports
Friday, June 29, 2012
Question 9 - usability and generalisability
The first part of this question asked whether the results of the evaluation were usable. The answers were interesting: 11 answered yes, two answered no and two projects gave no answer. Given that only 5 projects did project evaluation (and a further 2 did some form of product evaluation), it's interesting that 11 answered yes. This indicates that they were thinking along the lines of whether the project results were usable, not the evaluation results (although this may have been the fault of the interviewer for not clarifying well enough). Still, it is another example of crossover in the meaning of research and evaluation.
I think it would take a brave person to say there was nothing usable that came of their project. In fact, one participant talked at length about this: that there is no room in the academic world for admitting failure.
So back to the 11 yes answers. The five projects that had evaluated were included in this number, so it is good to see that there was in fact benefit in evaluating the projects. Furthermore, of the two projects that did product evaluation, one answered yes to the question and one answered no. This is an interesting case (14). The product was evaluated, but eventually no one ended up implementing the recommendations or using the resources produced in this project; reasons quoted included 'no time' and 'no one was interested'. This leads back to the question about stakeholders being consulted at the time of application. When we look at the answer to that question for this project, we find that the project leader did not consult and, in fact, was not sure what exactly a stakeholder is.
As for the part on generalisability, there was a range of answers. Some stated categorically no, but others, when pressed, reflected and gave various levels of yes, from 'some' to 'a lot'. It was clear, though, that few had thought about this question, and obviously they had not reported on it. The other theme that emerges is the difference between content and process. Some were stuck in thinking of their project as a discipline-specific thing that couldn't possibly be transposed, but others mentioned that their process could be followed by anyone doing similar evaluation and projects. Interestingly, though, few had actually explained the process in the report, so how could another follow it? Perhaps this level of detail could be found in research publications.
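Cross-referencing answers across questions for each case, as done informally above, can also be set out as a small crosstab. A minimal sketch with invented case data, just to show the kind of comparison I'm describing, not the actual study data.

```python
# Hypothetical cross-tabulation of what kind of evaluation a project did
# against whether the participant said the results were usable. Values invented.
from collections import Counter

cases = {
    "Case 01": {"evaluated": "project", "usable": "yes"},
    "Case 05": {"evaluated": "none",    "usable": "yes"},
    "Case 14": {"evaluated": "product", "usable": "no"},
}

crosstab = Counter((c["evaluated"], c["usable"]) for c in cases.values())
for (evaluated, usable), n in sorted(crosstab.items()):
    print(f"evaluated={evaluated:8s} usable={usable:3s} n={n}")
```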
Question 8 - review of evaluation plan
This set of questions asked whether the evaluation plan was reviewed and, if so, what the benefit of that review was.
From the 6 projects that previously said they had a plan, only 3 said they had it reviewed. However, I have to say that these answers were a little misguided: what they really meant was that they got feedback on their project at some stage. One mentioned the steering group, another mentioned students' and colleagues' feedback on the product, and the third mentioned feedback from the application review on who should be consulted during implementation of the project.
So in effect none of the evaluation plans was really reviewed before submission, and just one project mentioned formative checking in with the steering group.
Saturday, June 23, 2012
Question 7 - Key evaluation questions
I think this set of questions - 'Were key evaluation questions stated?' and, if yes, 'Could they be answered adequately?' - did not produce any fruitful data. Out of the 15 projects, 13 said no, and for the two who said yes there was no evidence of these questions in the application or the report.
This could suggest that there is no clear conception of what evaluation is. Though most people interviewed could talk about the topic, when put on the spot with this question the lack of evidence points clearly towards the fact that evaluation is not well done, or at least not clearly understood, or maybe that there are different perceptions of evaluation. In other words, there is no 'one size fits all'.
An interesting quote: "although not stated, it is understood (implicit)". But is it?
Other reasons for there being no key evaluation questions stated could be the inherent nature of action research and developmental evaluation, whereby the questions are developed as you go.