When doing hypothesis testing, there is a null hypothesis and one or more alternative hypotheses. The null hypothesis assumes that the effect you are searching for is not present in the population. If the effect is determined to be present, the null hypothesis is rejected in favor of an alternative hypothesis. If the effect is not found with high enough certainty, the null hypothesis is not rejected.

A type II error is one where the effect is present in the population, but the sample taken doesn't provide enough evidence to reject the null hypothesis. If you are testing whether two types of fertilizer produce the same crop yield, the type II error would occur if both (a) the mean crop yields are different, and (b) the hypothesis test did not allow you to reject the null hypothesis that μ1 = μ2.

To be terse: The probability of committing a type II error is the probability of drawing a sample mean X from the true sampling distribution N(μ, σ/√n) that falls within the acceptance region for the test.

"What?!"

Fear not. In a z-test, you will reject the null hypothesis if X ends up outside the acceptance region for the test. So, logically, you will not reject the null hypothesis if X is inside the acceptance region.

A type II error implies that the null hypothesis is false. What is the probability of committing a type II error if the actual distribution of the data is N(μ, σ)? In other words, what is the probability that a sample mean X drawn from the true sampling distribution N(μ, σ/√n) lies in the acceptance region?

An Example

H0: μ = 0
Ha: μ ≠ 0
α = .05

σ = 2.05
n = 10

Acceptance region: ( 0-z*σ/√n , 0+z*σ/√n )
                   (   -1.271  ,   1.271    )

This means that we will accept H0 if X lies in (-1.271, 1.271). So, what if the distribution of data is actually N(2, 2.05) and not N(0, 2.05) as H0 assumed? Well, what's the probability of getting a mean in that interval with the N(2, 2.05/√(10)) distribution (it's over √(10) because our mean is computed from 10 data points)? Here is a poor illustration and the related calculations:

       |         |    |
       |         | .'¯|¯'.
       |         |/   |   \
       |         |    |    \
       |       ,,|    |     ..
_______|_,,,/////|    |       '''...__
-------|---------|----|----------------
    -1.271     1.271  2
                     (μ)


P( -1.271 < X < 1.271 ), X~N(2, 2.05/√(10))

P( -5.05 < z < -1.12 )

P( z < -1.12 ) - P ( z < -5.05 )

   .1314       -     .0000

   .1314

Hence, the probability of a type II error is .1314. So if the actual mean of the distribution is 2, there is a 13.14% chance that the mean of 10 values pulled from that distribution won't let you reject the null hypothesis.
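
If you'd rather check the arithmetic with code than with a z-table, here is a minimal Python sketch, assuming scipy is available, that recomputes the acceptance region and β:

from math import sqrt

from scipy.stats import norm

mu0 = 0        # mean under H0
mu_true = 2    # actual mean of the distribution
sigma = 2.05   # population standard deviation
n = 10         # sample size
alpha = 0.05

se = sigma / sqrt(n)              # standard error of the mean
z_star = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = .05
lo, hi = mu0 - z_star * se, mu0 + z_star * se
print(f"acceptance region: ({lo:.3f}, {hi:.3f})")   # (-1.271, 1.271)

# beta = P(lo < X < hi) when X ~ N(mu_true, se)
beta = norm.cdf(hi, mu_true, se) - norm.cdf(lo, mu_true, se)
print(f"beta = {beta:.4f}")       # about .1303

The tiny difference from .1314 is just rounding: the table lookup above rounds the z-scores to two decimal places before looking up the probabilities.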

This process is similar for a t-test, except that you use t* instead of z* and a t-table (or calculator) to find the probability instead of a z-table.
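
Here is that substitution sketched in Python. Note that this mirrors the simplified recipe described above; an exact power calculation for a t-test actually involves the noncentral t distribution, which is beyond the scope of this example:

from math import sqrt

from scipy.stats import t

mu0, mu_true, s, n, alpha = 0, 2, 2.05, 10, 0.05
df = n - 1                  # degrees of freedom
se = s / sqrt(n)            # s is now the sample standard deviation

t_star = t.ppf(1 - alpha / 2, df)   # 2.262 for df = 9
lo, hi = mu0 - t_star * se, mu0 + t_star * se

# approximate beta: probability that the statistic computed under the
# true mean lands inside the (wider) acceptance region
beta = t.cdf((hi - mu_true) / se, df) - t.cdf((lo - mu_true) / se, df)
print(f"approximate beta = {beta:.4f}")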

Notes about the Type II Error

The probability of a type II error is represented by the variable β. The quantity 1-β is called the power of the test, and it is the probability of correctly rejecting the null hypothesis when it is false and the true mean of the distribution is some given value.
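
One way to make the power concrete is to simulate it. The sketch below is a hypothetical simulation (numpy assumed) that draws many samples of size 10 from the true N(2, 2.05) distribution, applies the z-test from the example, and counts how often H0 is correctly rejected; the rejection rate approximates the power, 1-β ≈ .87:

import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma, n, trials = 2, 2.05, 10, 100_000

# sample means of 10 draws each from the true distribution
means = rng.normal(mu_true, sigma, size=(trials, n)).mean(axis=1)

# reject H0 whenever the mean falls outside the acceptance region
rejected = (means < -1.271) | (means > 1.271)
print(f"empirical power: {rejected.mean():.4f}")   # roughly .87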

Both α (the alpha level) and β are probabilities of statistical errors, and generally you want them as low as possible. Lowering α is easy: just redo the analysis with a lower alpha level. This, however, will increase β, because decreasing α widens the acceptance region for the test. The best way to lower β without having to raise α is to increase the sample size. This shrinks the standard error of the sample mean and consequently narrows the acceptance region.
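
A quick sketch of that trade-off, reusing the numbers from the example: holding α fixed at .05, β shrinks rapidly as the sample size grows, because the standard error (and with it the acceptance region) narrows.

from math import sqrt

from scipy.stats import norm

mu0, mu_true, sigma, alpha = 0, 2, 2.05, 0.05
z_star = norm.ppf(1 - alpha / 2)

for n in (5, 10, 20, 40):
    se = sigma / sqrt(n)
    lo, hi = mu0 - z_star * se, mu0 + z_star * se
    beta = norm.cdf(hi, mu_true, se) - norm.cdf(lo, mu_true, se)
    print(f"n = {n:2d}: beta = {beta:.4f}")

With these numbers, β drops from about .41 at n = 5 to about .13 at n = 10 and under .01 at n = 20.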
