Describe some ways to explain the practical significance of statistical effect sizes.

What will be an ideal response?


1. Difference on the Original Measurement Scale: When the original outcome measure has inherent practical meaning, the effect size may be stated directly as the difference between the outcome for the intervention and control groups on that measure.
2. Comparison with Test Norms or Performance of a Normative Population: For programs that aim to raise the outcomes for a target population to mainstream levels, program effects may be stated in terms of the extent to which the program effect reduces the gap between the pre-intervention outcomes and the mainstream level.
3. Differences between Criterion Groups: When data on relevant outcome measures are available for groups with recognized differences in the program context, program effects can be compared to those differences on the respective outcome measures.
4. Proportion over a Diagnostic or Other Pre-Established Success Threshold: When a value on an outcome measure can be set as the threshold for success, the proportion of the intervention group with successful outcomes can be compared to the proportion of the control group with such outcomes.
5. Proportion over an Arbitrary Success Threshold: Expressing a program effect in terms of a success rate may help depict its practical significance even if the success rate threshold is arbitrary.
6. Comparison with the Effects of Similar Programs: The evaluation literature may provide information about the statistical effects of similar programs on similar outcomes, which can be compiled to identify which effects are small or large relative to what other programs have achieved. Meta-analyses that systematically compile and report statistical effect sizes are especially useful for this purpose. An effect size for the number of consecutive days without smoking after a smoking cessation program could be viewed as having greater practical significance if it was above the average effect size reported in a meta-analysis of smoking cessation programs, and less practical significance if it was well below that average.
7. Conventional Guidelines: Cohen (1988) provided guidelines for what are generally “small,” “medium,” and “large” effect sizes in social science research. For standardized mean difference effect sizes, for instance, Cohen suggested that .20 was a small effect, .50 a medium one, and .80 a large one. However, these guidelines were put forward to illustrate the role of effect sizes in statistical power analysis, and Cohen cautioned against using them when the research context is known well enough to allow more context-specific comparisons. They are, nonetheless, widely used as rules of thumb for judging the magnitude of intervention effects despite their potentially misleading implications.
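As a rough numerical illustration of items 4 and 7, the sketch below computes a standardized mean difference (Cohen's d) for two groups and compares each group's success rate against a threshold. The outcome scores, the threshold of 70, and the helper names (cohens_d, success_rate) are invented for illustration and are not drawn from any particular study.

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled standard deviation."""
    n_t, n_c = len(treatment), len(control)
    var_t = statistics.variance(treatment)  # sample variance (n - 1 denominator)
    var_c = statistics.variance(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

def success_rate(scores, threshold):
    """Proportion of a group at or above a pre-established success threshold."""
    return sum(score >= threshold for score in scores) / len(scores)

# Hypothetical outcome scores for an intervention group and a control group.
intervention = [72, 68, 75, 80, 66, 74, 79, 71]
control = [65, 60, 70, 68, 62, 66, 64, 69]

d = cohens_d(intervention, control)
print(f"Standardized mean difference (Cohen's d): {d:.2f}")

# Cohen's (1988) rule-of-thumb benchmarks: .20 small, .50 medium, .80 large.
label = "large" if d >= 0.8 else "medium" if d >= 0.5 else "small"
print(f"By the conventional guidelines this would count as a {label} effect.")

# Illustration of item 4: proportion of each group over an assumed success threshold of 70.
print(f"Intervention success rate: {success_rate(intervention, 70):.0%}")
print(f"Control success rate: {success_rate(control, 70):.0%}")
```

Whether the resulting d is judged practically significant would still depend on the comparisons in items 1 through 6, not on the conventional labels alone.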

Political Science
