Difference Between Regular Grammar and Context-Free Grammar
By Felicia Lee
Updated September 28, 2017
Grammar means something very different to linguists and computer programmers than it does to most people. While most of us think of grammar as a set of etiquette rules for socially acceptable language use, linguists and programmers think of grammar as something far more powerful: the set of rules that can generate any and all possible expressions of a given language, real or artificial, or of a fragment of one. Regular and context-free grammars are two of the basic grammar types; they differ in the kinds of rules they allow and, consequently, in the kinds of expressions they can produce.
The linguist Noam Chomsky developed the notions of context-free and regular grammars in his 1959 work “On Certain Formal Properties of Grammars.” He posited the existence of several basic grammar types, which differ from each other in the complexity of the linguistic expressions they can produce. Regular grammars are simpler and less expressive than context-free grammars.
Difference Between Rules
Regular and context-free grammars differ in the types of rules they allow. In Chomsky's terms, a grammar's rules rewrite “non-terminals” (symbols standing for phrases) into strings of “terminals” (individual words) and other non-terminals. Context-free rules place no restriction on the right-hand side: a non-terminal may rewrite to any number of words and phrases, in any order. Regular grammar rules are far more restricted: each rule may introduce individual words plus at most one phrase, and that phrase must always appear in the same position (for example, always at the right edge) of the string the rule produces.
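The contrast between the two rule types can be sketched in Python. This is a minimal illustration (the grammars and the `derive` helper are invented for this example, not taken from the article): rules are dictionaries mapping a non-terminal to its productions, lowercase strings are terminals (words), and uppercase strings are non-terminals.

```python
# Right-linear (regular) rules: each production is at most a few terminals
# followed by at most ONE non-terminal, always in the same (final) position.
regular_grammar = {
    "S": [["the", "NP"]],
    "NP": [["dog", "VP"]],
    "VP": [["barks"]],
}

# Context-free rules: a non-terminal may rewrite to any mix of terminals
# and non-terminals, in any order and in any number.
context_free_grammar = {
    "S": [["NP", "VP"]],
    "NP": [["the", "dog"]],
    "VP": [["barks"]],
}

def derive(grammar, symbol="S"):
    """Expand each non-terminal using its first production, left to right."""
    words = []
    for sym in grammar[symbol][0]:
        if sym in grammar:              # non-terminal: recurse
            words.extend(derive(grammar, sym))
        else:                           # terminal: emit the word
            words.append(sym)
    return words

print(" ".join(derive(regular_grammar)))       # the dog barks
print(" ".join(derive(context_free_grammar)))  # the dog barks
```

Both grammars here happen to generate the same sentence; the difference lies in the shape of the rules, which is what limits what regular grammars can ever express.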
Because context-free grammars allow a wider range of rules than regular grammars, they can generate a wider range of structures. For instance, they can assign a single phrase several possible structures, as in “a girl from the city with money problems” (the structure varies depending on whether “with money problems” describes the city or the girl). Regular grammars cannot do this. Rather, they generate only simple expressions consisting of strings of single, structurally independent words, possibly with one larger phrase (such as “very, very smart people”).
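The “very, very smart people” sort of flat, repetitive phrase is exactly what regular grammars handle well: regular grammars are equivalent in power to the classic regular expressions. A minimal sketch (the pattern is invented for this example):

```python
import re

# A regular expression standing in for a regular grammar: any number of
# repeated "very, " prefixes, then the fixed remainder of the phrase.
flat_phrase = re.compile(r"(very, )*very smart people")

print(bool(flat_phrase.fullmatch("very, very, very smart people")))  # True
print(bool(flat_phrase.fullmatch("smart people very")))              # False
```

The pattern can repeat a word without limit, but it cannot express the nested, ambiguous attachments of the “girl from the city” example, which require context-free rules.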
Context-free grammars are used in natural language processing to generate and parse language data because they can capture many of the defining features of human language, such as its potential for infinitely recursive (nested) structures. Regular grammars, which generate only a subset of the expressions of context-free grammars, are also used in natural language processing. However, they can only replicate or process short and grammatically simple linguistic expressions, such as those typically found in informal dialogue.
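The recursion point can be made concrete with the textbook language aⁿbⁿ (some number of a's followed by the same number of b's). A context-free grammar generates it with one recursive rule, S → a S b | (empty), and a short recursive recognizer mirrors that rule directly; no regular grammar can describe this language, because matching the two counts requires unbounded memory. A minimal sketch (the function name is invented for this example):

```python
def balanced(s: str) -> bool:
    """Recognize a^n b^n by mirroring the context-free rule S -> a S b | empty:
    strip one 'a' from the front and one 'b' from the back, then recurse."""
    if s == "":
        return True
    return s.startswith("a") and s.endswith("b") and balanced(s[1:-1])

print(balanced("aaabbb"))  # True
print(balanced("aabbb"))   # False
```

Each recursive call corresponds to one application of the rule S → a S b, which is exactly the kind of nested, self-embedding structure that regular grammars lack.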
References
- Chomsky, Noam. “On Certain Formal Properties of Grammars.” Information and Control, Volume 2, 1959.
- Pfeiffer, Joe. “CS 370: Grammars.” New Mexico State University.
- McCallum, Andrew. “Lecture 5: Context Free Grammars.” University of Massachusetts, Amherst, Fall 2007.
- Gamut, L.T.F. “Logic, Language, and Meaning: Volume 1.” 1991.
Felicia Lee is a freelance writer/editor and published author with over 15 years of experience. Her work has appeared in publications including the "Los Angeles Times" and on Salon.com. Felicia holds a Bachelor of Arts and a Master of Arts in English from Stanford and a Doctor of Philosophy in linguistics from the University of California, Los Angeles.