Internal standards have long been important in analytical chemistry, and they remain crucial for some types of instrumental analysis. When quantitative analysis of a sample component is required, the instrumental response of the unknown sample is compared to that of a reference standard of known concentration. Errors arise, however, whenever the analysis conditions cannot be reproduced exactly for every sample and standard.
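For a single-point comparison, and under the usual assumption of a linear detector response passing through the origin, this direct (external standard) calculation can be written as follows; the symbols C for concentration and A for instrumental response (for example, peak area) are introduced here only for illustration:

$$
C_{\text{sample}} = C_{\text{std}} \times \frac{A_{\text{sample}}}{A_{\text{std}}}
$$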

Consider, for example, the variation in injection volume when samples are injected manually in gas chromatography. When only a few microliters are injected by hand, differences of 5 to 10% in the actual volume delivered from one injection to the next would not be out of the question. Variability of that magnitude is unacceptable for many applications; regulatory guidelines for pharmaceutical analysis, for instance, typically require a relative standard deviation of 1% or less for replicate analyses of the same sample with a validated method.

An internal standard is therefore used to correct for variations in analytical conditions. The same amount of a substance of known concentration is added to every sample and standard to be analyzed. Instead of directly comparing the instrumental response of a sample to that of a reference standard, the ratios of analyte response to internal-standard response are compared. Because a variation such as a change in injection volume affects the analyte and internal-standard responses equally, it has no effect on the analyte-to-internal-standard ratio, effectively eliminating major sources of experimental error.
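The sketch below works through hypothetical numbers to show why the ratio is insensitive to injection volume. The peak areas, concentrations, and response factors are invented purely for illustration, and the calculation assumes a linear detector response and single-point calibration.

```python
# Minimal sketch contrasting direct (external standard) and internal standard
# quantitation. All numbers are hypothetical and chosen only for illustration.

true_conc = 8.0       # µg/mL, analyte concentration we want to recover
std_conc = 10.0       # µg/mL, known concentration of the reference standard
rf_analyte = 5000.0   # detector counts per (µg/mL) per unit injection volume
rf_is = 4000.0        # response factor of the internal standard
is_conc = 20.0        # µg/mL of internal standard added equally to all solutions

# Standard injected at nominal volume; sample injection runs 8% high,
# the kind of manual-injection variability described above.
vol_std, vol_smp = 1.00, 1.08

# Simulated peak areas (area scales with concentration and injected volume)
a_analyte_std = rf_analyte * std_conc * vol_std
a_is_std = rf_is * is_conc * vol_std
a_analyte_smp = rf_analyte * true_conc * vol_smp
a_is_smp = rf_is * is_conc * vol_smp

# Direct comparison: the 8% injection error propagates straight into the result
direct = std_conc * a_analyte_smp / a_analyte_std
print(f"Direct comparison:  {direct:.2f} µg/mL")    # 8.64, biased high by 8%

# Internal standard: injection volume cancels in each analyte/IS ratio
ratio_smp = a_analyte_smp / a_is_smp
ratio_std = a_analyte_std / a_is_std
corrected = std_conc * ratio_smp / ratio_std
print(f"Internal standard:  {corrected:.2f} µg/mL")  # 8.00, volume error cancels
```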