In formal language theory, two grammars are said to be equivalent if and only if they generate (describe) the same language.
For example, the context-free grammar with the productions

S -> bS
S -> Sb
S -> a

and the grammar with the productions

S -> bS
S -> aT
T -> bT
T -> ε
describe the same language, namely all strings consisting of exactly one a and zero or more bs: the language b*ab*. Therefore, the two grammars are equivalent. The second grammar is regular, which shows that the full power of context-free grammars is not required to describe this language: a regular expression, such as b*ab*, suffices.
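Equivalence of two specific grammars can at least be checked empirically: enumerate every terminal string each grammar derives up to some length bound and compare the resulting sets. A minimal sketch in Python (the grammar encoding as a dict of nonterminals to right-hand-side tuples is an assumption, not a standard library):

```python
def generate(grammar, start, max_len):
    """Collect all terminal strings of length <= max_len derivable from start.

    grammar maps each nonterminal to a list of right-hand sides, each a
    tuple of symbols; a symbol is a nonterminal iff it is a key of grammar.
    """
    results = set()
    seen = set()
    frontier = [(start,)]
    while frontier:
        form = frontier.pop()
        # Locate the first nonterminal in the sentential form, if any.
        idx = next((i for i, s in enumerate(form) if s in grammar), None)
        if idx is None:
            # Purely terminal string: keep it if it fits the bound.
            if len(form) <= max_len:
                results.add("".join(form))
            continue
        # Terminals never disappear, so prune forms already too long.
        if sum(1 for s in form if s not in grammar) > max_len:
            continue
        for rhs in grammar[form[idx]]:
            new_form = form[:idx] + rhs + form[idx + 1:]
            if new_form not in seen:
                seen.add(new_form)
                frontier.append(new_form)
    return results

g1 = {"S": [("b", "S"), ("S", "b"), ("a",)]}
g2 = {"S": [("b", "S"), ("a", "T")], "T": [("b", "T"), ()]}

print(generate(g1, "S", 4) == generate(g2, "S", 4))  # True
```

Such a check can only refute equivalence (a mismatch at some bound is a counterexample); agreement up to any finite length proves nothing in general, which foreshadows the undecidability result below.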
This is a type of question that recurs frequently: given some context-free grammar, is it possible to construct an equivalent grammar of a certain, more restricted form?
This turns out to be a complicated problem; most of the basic theory was laid down in the 1960s.
Context-free grammars can easily be rewritten into more restricted forms, so-called normal forms. Well-known normal forms are Chomsky normal form and Greibach normal form.
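To give a flavor of such rewriting, here is a sketch of one step of the conversion to Chomsky normal form: binarization, where a production A -> X1 X2 ... Xk with k > 2 is split into a chain of binary rules using fresh nonterminals. The grammar encoding and the fresh-name scheme ("S_1", "S_2", ...) are assumptions made for this illustration:

```python
def binarize(grammar):
    """Split right-hand sides longer than 2 into chains of binary rules,
    one step of the conversion to Chomsky normal form.

    grammar maps each nonterminal to a list of right-hand sides (tuples).
    Fresh nonterminals are named "<lhs>_1", "<lhs>_2", ... (an assumed scheme).
    """
    new_grammar = {}
    for lhs, rhss in grammar.items():
        counter = 0
        for rhs in rhss:
            head = lhs
            while len(rhs) > 2:
                counter += 1
                fresh = f"{lhs}_{counter}"
                # Peel off the first symbol; a fresh nonterminal covers the rest.
                new_grammar.setdefault(head, []).append((rhs[0], fresh))
                head, rhs = fresh, rhs[1:]
            new_grammar.setdefault(head, []).append(rhs)
    return new_grammar

# S -> aSbS  becomes  S -> a S_1, S_1 -> S S_2, S_2 -> b S
print(binarize({"S": [("a", "S", "b", "S")]}))
```

Binarization alone is not the full conversion (terminals in long rules, ε-rules, and unit rules still need handling), but each of those steps is a similarly mechanical rewrite that preserves the generated language.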
Clearly, if some normalization procedure rewrote all equivalent grammars to the same normal form, we would have an easy way to decide whether two grammars are equivalent: normalize both and compare. But this turns out to be impossible, because the equivalence of context-free grammars is undecidable.
(It is decidable for deterministic context-free grammars, which can only describe a proper subset of the context-free languages.)
But what if the grammar at hand really describes a type of language for which equivalence is decidable? For instance, thinking of the example above, can we find a normalization procedure that rewrites any grammar to a regular grammar whenever the language is regular? Again, the answer is negative: the question whether an arbitrary context-free grammar describes a regular language is undecidable.