Main Characteristics of the Z-test, T-test, and F-test, and Other Pointers as They Relate to ANOVAs and Chi-Square

Power (Z-test)

A parametric test

-The probability of making a correct decision (rejecting H0) when H0 is false

-As the power of an experiment increases, the probability of making a Type II error decreases

-Power + Beta = 1

Methods of increasing power:

-Increasing the size of the effect of the IV

-Increasing sample size

-Decreasing variability
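A minimal Python sketch of these ideas, using hypothetical numbers (H0 mean 100, true mean 105, sigma 15, one-tailed alpha = .05), showing power rising and Beta falling as N increases:

    from scipy.stats import norm
    import numpy as np

    # Hypothetical one-tailed, one-sample z test: H0 mean 100, true mean 105, sigma 15
    mu0, mu1, sigma, alpha = 100, 105, 15, 0.05
    z_crit = norm.ppf(1 - alpha)               # critical z value bounding the critical region

    for n in (10, 25, 50, 100):                # increasing sample size
        delta = (mu1 - mu0) / (sigma / np.sqrt(n))   # true effect in standard-error units
        power = 1 - norm.cdf(z_crit - delta)         # P(reject H0 | H0 false)
        beta = 1 - power                             # P(Type II error); power + beta = 1
        print(f"n={n:>3}  power={power:.3f}  beta={beta:.3f}")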

Sampling Distribution:

-A distribution of all the possible values a statistic can take, along with the probability of getting each value if sampling is random from the null-hypothesis population



-It includes all of the possible values of a statistic

-It includes the frequency or probability of each value
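A minimal simulation sketch (hypothetical normal null-hypothesis population with mean 100 and sigma 15) showing how a sampling distribution of the mean can be built up by repeated random sampling:

    import numpy as np

    # Hypothetical null-hypothesis population
    rng = np.random.default_rng(0)
    null_population = rng.normal(loc=100, scale=15, size=100_000)

    N = 25
    # Draw many random samples of size N and record each sample mean
    sample_means = [rng.choice(null_population, size=N).mean() for _ in range(10_000)]

    # The collection of means approximates the sampling distribution of the mean:
    # the values the statistic can take, with their relative frequencies
    print(np.mean(sample_means), np.std(sample_means))   # ~100 and ~15/sqrt(25) = 3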

Normal Deviate Test:

-The Z test applied to sample means


-Experiment involves a single sample mean

-Used when parameters of the Null Hypothesis Population are known

-Sampling distribution of the mean should be known and normally distributed

Conditions for use of Z Test:

-Single sample

-Population mean and standard deviation are known

-Sampling distribution of the mean is normally distributed or N > 30
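A minimal Python sketch of the normal deviate (z) test under these conditions, with hypothetical scores and assumed null-hypothesis population parameters mu = 100 and sigma = 15:

    import numpy as np
    from scipy.stats import norm

    # Known null-hypothesis population parameters (hypothetical)
    mu, sigma, alpha = 100, 15, 0.05

    # N = 30 hypothetical scores from a single sample
    sample = np.array([104, 110, 98, 107, 112, 101, 109, 105, 99, 113,
                       108, 102, 111, 106, 100, 103, 115, 97, 109, 104,
                       112, 101, 108, 106, 110, 99, 107, 113, 102, 105])

    z_obt = (sample.mean() - mu) / (sigma / np.sqrt(len(sample)))   # obtained z
    z_crit = norm.ppf(1 - alpha / 2)                                # two-tailed critical value
    print(z_obt, z_crit, abs(z_obt) >= z_crit)   # reject H0 if z_obt falls in the critical region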

Null-hypothesis Population:

-Actual or theoretical set of population scores that would result if the experiment were done on the entire population and the IV had no effect

Critical Region:

-The area under the curve that contains all the values of the statistic that allow rejection of Ho

Critical value:

-The value of the statistic that bounds the critical region

Single Sample t Test

-Less powerful than the z test

-Has more extreme critical values than the z test

-As df increases, t becomes more similar to z

-Analyzes raw scores

-Use the t test when:

-The experiment has only one sample

-Population mean is specified, but population standard deviation is unknown

-Sampling distribution of the mean is normally distributed (population normal or N > 30)
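A minimal Python sketch of the single-sample t test under these conditions (hypothetical scores; population mean specified as 100, population standard deviation unknown), using scipy.stats.ttest_1samp:

    import numpy as np
    from scipy.stats import ttest_1samp

    # Hypothetical scores; population standard deviation unknown
    scores = np.array([102, 97, 105, 110, 99, 101, 108, 95, 104, 107])

    t_obt, p_value = ttest_1samp(scores, popmean=100)   # df = N - 1 = 9
    print(t_obt, p_value)                               # reject H0 if p_value < alpha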

-Sampling distribution of t:

-A probability distribution of the t values which would occur if all possible different samples of a fixed size N were drawn from the Null Hypothesis Population

-Characteristics of t Distributions:

-Family of curves

-Shaped similarly to the z distribution, but with more area in the tails at low df

Degrees of freedom (df):

-The number of scores that are free to vary

-The higher the df, the lower the critical value

Cohen’s d:

-Determines size of effect

-The larger the estimated d, the greater the size of effect
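A minimal sketch of the estimated Cohen's d, using the same hypothetical scores as the t test sketch above:

    import numpy as np

    # Same hypothetical scores; specified population mean mu0 = 100
    scores = np.array([102, 97, 105, 110, 99, 101, 108, 95, 104, 107])
    mu0 = 100

    d_hat = (scores.mean() - mu0) / scores.std(ddof=1)   # mean difference in sample-SD units
    print(d_hat)   # rough guide: ~0.2 small, ~0.5 medium, ~0.8 large effect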

Confidence Interval:

-Range of values which probably contains the population mean

-The larger the interval, the more confidence we have that the interval contains the population mean
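A minimal sketch of a 95% confidence interval for the population mean, again using the hypothetical scores from the t test sketch:

    import numpy as np
    from scipy.stats import t

    # Hypothetical scores; population standard deviation unknown, so the t distribution is used
    scores = np.array([102, 97, 105, 110, 99, 101, 108, 95, 104, 107])
    n = len(scores)
    mean = scores.mean()
    sem = scores.std(ddof=1) / np.sqrt(n)        # estimated standard error of the mean
    t_crit = t.ppf(0.975, df=n - 1)

    print(mean - t_crit * sem, mean + t_crit * sem)   # wider interval -> higher confidence level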

Correlated and Independent Groups T-test

-Correlated groups design analyzes difference scores (independent groups design analyzes raw scores)

-A parametric test


-Sampling distribution must be normally distributed

Correlated Groups:

-Pairs of subjects matched on one or more characteristics, or repeated measures in which each subject serves in both conditions

-Under H0, the mean of the population of difference scores = 0

-Advantageous when:

-High correlation between paired scores

-Low variability in difference scores and high variability in raw scores
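A minimal Python sketch of the correlated groups t test on hypothetical before/after scores for the same subjects, using scipy.stats.ttest_rel:

    import numpy as np
    from scipy.stats import ttest_rel

    # Hypothetical paired scores for the same eight subjects
    before = np.array([12, 15, 11, 14, 13, 16, 12, 15])
    after  = np.array([14, 18, 12, 17, 15, 19, 13, 16])

    # Equivalent to a single-sample t test on the difference scores (after - before)
    t_obt, p_value = ttest_rel(after, before)
    print(t_obt, p_value)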

Independent Groups (Independent Measures):

-Used more often

-Random sampling of subjects

-Random assignment

-Each subject tested only once

-Raw scores are analyzed

-t Test analyzes difference between sample means

-Sampling distribution of x̄1 – x̄2 is normally distributed

-Homogeneity of variance

-Advantageous when:

-Experiments do not allow the same subject to be used twice

-To increase the df
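A minimal Python sketch of the independent groups t test on hypothetical raw scores, using scipy.stats.ttest_ind (which assumes homogeneity of variance by default):

    import numpy as np
    from scipy.stats import ttest_ind

    # Hypothetical raw scores; each subject tested only once, one group per subject
    group1 = np.array([23, 27, 25, 30, 26, 24, 28])
    group2 = np.array([31, 29, 34, 33, 30, 32, 35])

    t_obt, p_value = ttest_ind(group1, group2)   # analyzes the difference between sample means
    print(t_obt, p_value)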

ANOVA (analysis of variance) – F test

-A parametric test

-Used to analyze data from experiments that use two or more groups or conditions

-Used instead of pairwise t tests in order to hold the probability of making a Type I error at alpha

-F test allows us to make one overall comparison that tells whether there is a significant difference between the means of the groups

-ANOVA can be used for Independent Groups design or Repeated Measures design


-F is never negative

-F distribution is positively skewed

-The median F value is approximately 1

-F distribution is a family of curves that varies with df

One-Way ANOVA:

– H1 is nondirectional

– H0 states that the different conditions are equally effective

– Assumes IV only affects the mean of the scores, not the variance


-Populations normally distributed

-Homogeneity of variance

-F is minimally affected by violations of population normality and  homogeneity of variance

-Size of effect:

-Eta squared (see the sketch after this section)

-Power of the ANOVA:

-Increasing sample size increases power

-The larger the real effect of the independent variable the higher the power is to detect a real effect

-The higher the sample variability, the lower the power to detect a real effect
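A minimal Python sketch of a one-way ANOVA on three hypothetical independent groups, with eta squared computed as the size-of-effect estimate (see the eta squared item above), using scipy.stats.f_oneway:

    import numpy as np
    from scipy.stats import f_oneway

    # Three hypothetical independent groups
    g1 = np.array([4, 6, 5, 7, 6])
    g2 = np.array([8, 9, 7, 10, 9])
    g3 = np.array([6, 7, 8, 6, 7])

    F_obt, p_value = f_oneway(g1, g2, g3)   # one overall comparison of the group means

    # Eta squared = SS_between / SS_total (proportion of total variability due to the IV)
    all_scores = np.concatenate([g1, g2, g3])
    ss_total = ((all_scores - all_scores.mean()) ** 2).sum()
    ss_between = sum(len(g) * (g.mean() - all_scores.mean()) ** 2 for g in (g1, g2, g3))
    eta_squared = ss_between / ss_total

    print(F_obt, p_value, eta_squared)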

Multiple Comparisons (Q statistic):

-Used when ANOVA is performed on k > 2 groups

-Q statistic (studentized range distribution)

-A priori or planned comparison:

-These comparisons are planned in advance

-Does not correct for higher probability of Type I error

-More powerful than post hoc tests

-A posteriori or post hoc comparisons:

-Maintain the Type I error rate at alpha

-ex: Q statistic

-Tukey HSD Test
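A minimal sketch of post hoc pairwise comparisons with the Tukey HSD test on three hypothetical groups, using scipy.stats.tukey_hsd (available in newer SciPy releases):

    import numpy as np
    from scipy.stats import tukey_hsd

    # Hypothetical groups (k = 3, so pairwise comparisons are needed after the ANOVA)
    g1 = np.array([4, 6, 5, 7, 6])
    g2 = np.array([8, 9, 7, 10, 9])
    g3 = np.array([6, 7, 8, 6, 7])

    result = tukey_hsd(g1, g2, g3)   # holds the familywise Type I error rate at alpha
    print(result)                    # pairwise differences, confidence intervals, p values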

Two-way ANOVA (analysis of variance):

-Allows us to evaluate the effect of two independent variables and the interaction between them


-Populations from which samples are drawn are normally distributed

-Homogeneity of variance

-As long as sample sizes are equal, ANOVA is robust to violations
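A minimal sketch of a two-way ANOVA on hypothetical data with two IVs (labeled a and b here) and their interaction, using statsmodels' ols and anova_lm:

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Hypothetical balanced data: two IVs (a, b) with 4 scores per cell
    df = pd.DataFrame({
        "a": ["low", "low", "low", "low", "high", "high", "high", "high"] * 2,
        "b": ["x"] * 8 + ["y"] * 8,
        "score": [3, 4, 5, 4, 6, 7, 6, 8, 5, 4, 6, 5, 9, 8, 10, 9],
    })

    model = ols("score ~ C(a) * C(b)", data=df).fit()   # main effects plus interaction
    print(sm.stats.anova_lm(model, typ=2))              # F tests for a, b, and a:b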

Chi-Square and other Nonparametric Tests:


Parametric tests:

-ex: Z-test, T-test, F-test

-Generally more powerful and versatile

-Generally robust to violations of their assumptions

-Use a parametric test whenever possible

-Depend substantially on population characteristics


Nonparametric tests:

-ex: Chi-Square

-Depend minimally on population characteristics

-Distribution-free tests

-Only use when a parametric test cannot be used


Chi-Square:

-Used with nominal data (categories)

-Tests if the observed results differ significantly from the results expected if H0 were true

-Family of curves

-The larger the discrepancy between observed and expected results, the more unreasonable it is that H0 is true


-Independence exists between each observation in the contingency table

-Sample size is large enough so that the expected frequency in each cell is at least 5

-If the table is 1×2 or 2×2, then each expected frequency should be at least 10

-Chi-square can be used with data of any level of scaling, as long as the data are reduced to frequencies in mutually exclusive categories

Contingency table:

-Shows contingency between two variables where the variables have been classified into mutually exclusive categories and the cell entries are frequencies
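A minimal Python sketch of the chi-square test of independence on a hypothetical 2×2 contingency table of frequencies, using scipy.stats.chi2_contingency:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 contingency table: cell entries are observed frequencies
    observed = np.array([[30, 20],
                         [15, 35]])

    chi2_obt, p_value, df, expected = chi2_contingency(observed)
    print(chi2_obt, p_value, df)
    print(expected)   # check that each expected frequency meets the minimum-size rule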