Means, Toyama, Murphy, Bakia, & Jones (2010), Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies
I’m interested in this article for the methodology of their meta-evaluation (although I have to say the content is really interesting too and lends itself to the FLaMe program).
They specify the type of studies they were evaluating: these studies had to have stringent designs (random-assignment or controlled quasi-experimental) and examine the effects on objective measures of student learning. In relation to my project, I have not really specified the sample in this way and have used a sample of convenience – will this hold up to rigorous testing?
Furthermore, I am not looking at quantitative studies; evaluations tend to be more qualitative in nature.
There was a main finding from the literature review: few rigorous studies in the area of interest had been published. In fact, none had been found in the years originally searched, and so the search was expanded by two years.
There were key findings from the meta-analysis and further findings from the narrative review. The narrative review came from analysing the studies that could not be included in the meta-analysis because they lacked a particular control condition (p. xii).
There are interesting comments on the potential for bias stemming from study authors' dual roles as experimenters and instructors.
The lit review and meta-analysis were guided by four research questions. Context for the meta-analysis was given, explaining who commissioned the study (the stakeholders) and its overall goal. The authors then described a conceptual framework for the topic [online learning] which included three key components.
Methodology
First, define the topic [online learning] and state what is included and what is not. Use this to define the categories to be searched for (three in this study).
List data sources and search strategies. Give the date range searched, and note where different ranges apply, say for theses etc.
List the databases that were used and the keywords (given in the appendix).
Additional search activities included a review of articles cited in recent meta-analyses and narrative syntheses of similar topics.
Key journals were identified and abstracts from each of these in a given period were manually reviewed.
The Google Scholar search engine was used with a series of keywords, and any article abstracts retrieved were reviewed to ensure no duplication with previous searches.
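To make that de-duplication step concrete, here is a minimal sketch of how newly retrieved records might be checked against earlier search results by a normalised title. This is my own illustration, not the report's procedure, and the record fields and titles are invented:

```python
def normalise(title: str) -> str:
    """Lower-case and strip punctuation so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum() or ch.isspace()).strip()

def deduplicate(new_records: list[dict], seen_titles: set[str]) -> list[dict]:
    """Keep only records whose titles did not appear in previous searches."""
    unique = []
    for record in new_records:
        key = normalise(record["title"])
        if key not in seen_titles:
            seen_titles.add(key)
            unique.append(record)
    return unique

# Invented example: one hit duplicates an earlier search, one is new.
previous = {normalise("Online vs. face-to-face learning: a comparison")}
scholar_hits = [
    {"title": "Online vs face-to-face learning: A comparison"},
    {"title": "Blended instruction in undergraduate statistics"},
]
print(deduplicate(scholar_hits, previous))  # only the second record survives
```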
Screening was then carried out in two stages. First, abstracts were reviewed, giving studies the 'benefit of the doubt' (p.11) when checking them against the inclusion criteria. The numbers of papers included and rejected were stated (as percentages), with reasons for rejection.
Full-text screening – the next pass had two stages: studies had to meet content relevance criteria (listed) as well as basic quality (method) criteria (also listed). A table of the primary reasons for exclusion was included, detailing numbers and percentages for each reason (p.13).
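That exclusion table is essentially a tally of the primary reason each study failed the full-text screen. A small sketch of the bookkeeping, with invented reason labels standing in for the report's categories:

```python
from collections import Counter

# One primary exclusion reason per screened-out study (invented labels).
exclusions = [
    "no control condition",
    "no objective learning measure",
    "no control condition",
    "not fully online or blended",
    "no control condition",
]

counts = Counter(exclusions)
total = sum(counts.values())
for reason, n in counts.most_common():
    print(f"{reason}: {n} ({100 * n / total:.0f}%)")
```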
The coding of the study features was then detailed, covering both study features and study quality, followed by a paragraph explaining how interrater reliability was checked. Finally, data analysis was explained; this is entirely statistical, since the chosen studies were all quantitative, and meta-analysis software was used for the computations.
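The report doesn't spell out its interrater-reliability statistic or pooling model in these notes, so the sketch below is only a guess at the general shape of both steps: Cohen's kappa is a common chance-corrected agreement measure for two coders on categorical study features, and inverse-variance weighting is the simplest way to pool effect sizes (the actual computations were done in dedicated meta-analysis software). All the numbers are invented:

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Chance-corrected agreement between two coders on the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected if both coders worked independently at their base rates.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

def pooled_effect(effects: list[float], variances: list[float]) -> float:
    """Fixed-effect inverse-variance weighted mean effect size."""
    weights = [1 / v for v in variances]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

# Invented codes for five studies, assigned by two raters.
a = ["blended", "online", "online", "blended", "online"]
b = ["blended", "online", "blended", "blended", "online"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # kappa = 0.62

# Invented effect sizes (e.g. Hedges' g) with their variances.
print(f"pooled g = {pooled_effect([0.25, 0.10, 0.35], [0.04, 0.02, 0.05]):.2f}")
```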