Chapter 9 - Introduction to the t Statistic
This is a short chapter that introduces the common t-test that most scientists frequently use, and that you have undoubtedly heard about before now. Please read and understand the whole chapter. I will summarize the most important points here.
Up to this point, we have been using a test statistic, z, that allows us to make an inference about whether our sample is different from some population mean. In order to use z, we had to know four things: the population mean, the population standard deviation, our sample mean, and our sample size. As you know, however, we typically do not know the population standard deviation. Instead, we pick a sample from the population and test whether the sample mean is significantly different from some other value. To conduct this sort of test we compute what we refer to as the one-sample t statistic, or t-test. The t is pretty similar to the z; in fact, it's almost identical. The only difference is that we have to estimate the population standard deviation, σ. Remember, if you know σ, then use the z-test; if you don't know σ, then estimate it (find s) as described below and in the text, and use the t-test.
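To make the "σ known" case concrete, here is a minimal sketch of the z-test with made-up numbers (the data values are hypothetical, not from the text):

```python
import math

def z_test(M, mu, sigma, n):
    """One-sample z statistic: usable only when sigma,
    the population standard deviation, is actually known."""
    sigma_M = sigma / math.sqrt(n)  # standard error of the mean
    return (M - mu) / sigma_M

# Hypothetical numbers: sample mean M = 53, mu = 50, sigma = 10, n = 25
# sigma_M = 10 / 5 = 2, so z = (53 - 50) / 2 = 1.5
print(z_test(53, 50, 10, 25))
```

When σ is unknown, the same skeleton is used, but σ is replaced by the sample estimate s, which is what turns z into t.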
We have already discussed how to estimate σ from a sample of scores. The formula is s = √(SS / (n − 1)), where SS = Σ(X − M)² is the sum of squared deviations from the sample mean M. Notice that our estimate uses n − 1 in the denominator. The main point of this chapter can be boiled down to the following: to calculate the t-test, we calculate the estimated standard error, s_M = s/√n, and use the formula t = (M − μ)/s_M. Notice the similarity between the z-test, z = (M − μ)/σ_M, and the t-test. The only difference is that in the z-test we use σ_M = σ/√n, and in the t-test we use s_M = s/√n.
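The whole computation above can be sketched in a few lines (a minimal sketch with hypothetical scores; only the standard library is used):

```python
import math
import statistics

def one_sample_t(data, mu):
    """One-sample t statistic: t = (M - mu) / s_M, where s is the
    sample standard deviation and s_M = s / sqrt(n)."""
    n = len(data)
    M = statistics.mean(data)
    s = statistics.stdev(data)  # statistics.stdev uses n - 1 in the denominator
    s_M = s / math.sqrt(n)      # estimated standard error of the mean
    return (M - mu) / s_M

# Hypothetical scores, tested against mu = 2:
# M = 3, SS = 10, s^2 = 10/4 = 2.5, s_M = sqrt(2.5/5), t = 1/sqrt(0.5) ~ 1.414
print(one_sample_t([1, 2, 3, 4, 5], 2))
```

Note that `statistics.stdev` already divides by n − 1, so it matches the formula for s given above.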
Once we have calculated a t for our sample, we have to compare it to some critical value(s) that we look up in a table. When we used the z-test, we used the normal distribution table to find the critical values for a specific α. We assumed that z-scores were normally distributed. Unlike z-scores, t-scores are not perfectly normally distributed. This is due to the fact that we are estimating the population variability, and we can never estimate it perfectly, especially if we have a very small n.
Therefore, we have to use a different table (Table B.2) to find the critical values for a t-test, and the critical values depend on our sample size. In general, our critical values are smaller with a big n than they are with a small n. In other words, if we use a big sample size, we do not have to have as big a t-score to reject H0 as we would need with a small sample size. As mentioned in Chapter 8, the "power" of the test increases with a large n.
In order to find the critical value(s) in the table, you have to know the α that you will be using, whether your test is one-tailed or two-tailed, and the degrees of freedom (df). Degrees of freedom is a function of the number of independent data values in your sample and the number of parameters that you must estimate in your statistic. In the t-test, the degrees of freedom is the total number of subjects (which were independently selected from the population) minus one, because we are estimating one parameter, the population standard deviation. Thus, df = n − 1.
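If you have SciPy available, you can look up the same critical values that a t table gives; this sketch shows how the critical value shrinks toward the z critical value (1.96 for a two-tailed α = .05) as n grows:

```python
from scipy import stats

def t_critical(alpha, n, two_tailed=True):
    """Critical t value for a one-sample t-test with df = n - 1."""
    df = n - 1
    p = 1 - alpha / 2 if two_tailed else 1 - alpha
    return stats.t.ppf(p, df)

# Two-tailed test, alpha = .05:
print(t_critical(0.05, 10))      # df = 9  -> about 2.262
print(t_critical(0.05, 30))      # df = 29 -> about 2.045
print(t_critical(0.05, 10_000))  # huge df -> approaches 1.96
```

Exact decimals may differ slightly from the printed table because of rounding, but the pattern (bigger n, smaller critical value, more power) is the point.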
Read about the two assumptions of the t-test. The first is that the values in your sample should be independent of each other. In other words, the selection of one value from the population should not affect the selection of another. This is typically accomplished by random sampling.
The second assumption is that your population should be normally distributed. Just as with the z-test, if it is not normal, then your distribution of sample means will not be normal unless your sample size is large.
The nice thing about the t-test is that it can be used in many situations where you do not know the population variability. The authors have provided some examples of these types of situations.
There are a number of exercises that will be helpful. Your goal is to understand how to test hypotheses with the one-sample t-test. Once again, try the odd-numbered exercises.