[Logo]

Link Grammar Parser


News

July, 2018: link-grammar 5.5.1 released! See below for a description of recent changes.

What is Link Grammar?

The Link Grammar Parser is a syntactic parser of English, Russian, Arabic and Persian (and other languages as well), based on Link Grammar, an original theory of syntax and morphology. Given a sentence, the system assigns to it a syntactic structure, which consists of a set of labelled links connecting pairs of words. The parser also produces a "constituent" (HPSG style phrase tree) representation of a sentence (showing noun phrases, verb phrases, etc.). The RelEx extension provides Stanford-parser compatible dependency grammar output.

The theory of Link Grammar parsing, and the original version of the parser, were created in 1991 by Davy Temperley, John Lafferty and Daniel Sleator, at the time professors of linguistics and computer science at Carnegie Mellon University. It is the product of decades of academic research into grammar and morphology, and is discussed in numerous publications.

Although based on the original Carnegie-Mellon code base, the current Link Grammar package has dramatically evolved and is profoundly different from earlier versions. There have been innumerable bug fixes; performance has improved by more than an order of magnitude. The package is fully multi-threaded, fully UTF-8 enabled, and has been scrubbed for security, enabling cloud deployment. Parse coverage of English has been dramatically improved; other languages have been added (most notably, Russian). There is a raft of new features, including support for morphology, log-likelihood semantic selection, and a sophisticated tokenizer that moves far beyond white-space-delimited sentence-splitting.

Quick Overview

The parser includes APIs in several programming languages, as well as a handy command-line tool for playing with it. Here's some typical output:

              linkparser> This is a test!
                 Linkage 1, cost vector = (UNUSED=0 DIS= 0.00 LEN=6)
              
                  +-------------Xp------------+
                  +----->WV----->+---Ost--+   |
                  +---Wd---+-Ss*b+  +Ds**c+   |
                  |        |     |  |     |   |
              LEFT-WALL this.p is.v a  test.n !
              
              (S (NP this.p) (VP is.v (NP a test.n)) !)
              
                          LEFT-WALL    0.000  Wd+ hWV+ Xp+
                             this.p    0.000  Wd- Ss*b+
                               is.v    0.000  Ss- dWV- O*t+
                                  a    0.000  Ds**c+
                             test.n    0.000  Ds**c- Os-
                                  !    0.000  Xp- RW+
                         RIGHT-WALL    0.000  RW-

This rather busy display illustrates many interesting things. For example, the Ss*b link connects the verb and the subject, and indicates that the subject is singular. Likewise, the Ost link connects the verb and the object, and also indicates that the object is singular. The WV (verb-wall) link points at the head-verb of the sentence, while the Wd link points at the head-noun. The Xp link connects to the trailing punctuation. The Ds**c link connects the noun to the determiner: it again confirms that the noun is singular, and also that the noun starts with a consonant. (The PH link, not required here, is used to force phonetic agreement, distinguishing 'a' from 'an'). These link types are documented in the English Link Documentation.

The bottom of the display is a listing of the "disjuncts" used for each word. The disjuncts are simply a list of the connectors that were employed to form the links. They are particularly interesting because they serve as an extremely fine-grained form of "part of speech" or "grammatical category", although they can also be interpreted as "semantic selections". Thus, for example, the disjunct S- O+ indicates a transitive verb: it's a verb that takes both a subject and an object. The additional markup above shows that 'is' is not only being used as a transitive verb, but also carries finer details: it is a transitive verb that took a singular subject, and was used as (is usable as) the head verb of a sentence. The floating-point value is the "cost" of the disjunct; it very roughly captures the log-likelihood of this particular grammatical (and semantic!) usage. Much as parts of speech correlate with word meanings, so also these fine-grained parts of speech correlate with much finer distinctions and gradations of meaning.
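The same parse can be produced programmatically. The fragment below is a minimal sketch of driving the parser from C; it assumes the function names declared in link-includes.h for the 5.x series (dictionary_create_lang, sentence_parse, linkage_print_diagram, and so on), so check the C/C++ API documentation for the exact signatures in your version.

        /* Minimal sketch: parse "This is a test!" and print the diagram and
         * disjunct listing discussed above. Function names are taken from
         * link-includes.h (5.x series); verify against your installed headers. */
        #include <stdio.h>
        #include <stdbool.h>
        #include <link-grammar/link-includes.h>

        int main(void)
        {
            Dictionary    dict = dictionary_create_lang("en");
            Parse_Options opts = parse_options_create();
            Sentence      sent = sentence_create("This is a test!", dict);

            sentence_split(sent, opts);              /* tokenize the input */
            if (sentence_parse(sent, opts) > 0)      /* returns the number of linkages found */
            {
                Linkage lkg = linkage_create(0, sent, opts);  /* linkage 0 = lowest cost */

                char *diagram = linkage_print_diagram(lkg, true, 80);
                printf("%s", diagram);               /* the ASCII-art linkage diagram */
                linkage_free_diagram(diagram);

                char *disjuncts = linkage_print_disjuncts(lkg);
                printf("%s", disjuncts);             /* the per-word disjunct listing */
                linkage_free_disjuncts(disjuncts);

                linkage_delete(lkg);
            }

            sentence_delete(sent);
            parse_options_delete(opts);
            dictionary_delete(dict);
            return 0;
        }

Compiled and linked against the installed library, this should print output much like the link-parser session shown above.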

The link-grammar parser also supports morphological analysis. Here is an example in Russian:

              linkparser> это теста
                 Linkage 1, cost vector = (UNUSED=0 DIS= 0.00 LEN=4)
              
                           +-----MVAip-----+
                  +---Wd---+       +-LLCAG-+
                  |        |       |       |
              LEFT-WALL это.msi тест.= =а.ndnpi

The LL link connects the stem 'тест' to the suffix 'а'. The MVA link connects only to the suffix, because, in Russian, it is the suffixes that carry all of the syntactic structure, and not the stems. The Russian lexis is documented here.
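The individual links, including the LL stem-suffix links, can also be inspected programmatically. Below is a minimal sketch using the link accessor functions from link-includes.h (linkage_get_num_links, linkage_get_link_label and friends); again, treat the exact names as something to verify against your installed version.

        /* Sketch: parse a Russian sentence and list each link together with the
         * two (possibly morpheme-split) words it connects. */
        #include <stdio.h>
        #include <link-grammar/link-includes.h>

        int main(void)
        {
            Dictionary    dict = dictionary_create_lang("ru");
            Parse_Options opts = parse_options_create();
            Sentence      sent = sentence_create("это теста", dict);

            sentence_split(sent, opts);
            if (sentence_parse(sent, opts) > 0)
            {
                Linkage lkg = linkage_create(0, sent, opts);
                int nlinks = linkage_get_num_links(lkg);

                for (int i = 0; i < nlinks; i++)
                {
                    /* e.g. "LLCAG: тест.= -- =а.ndnpi" for the stem-suffix link */
                    printf("%s: %s -- %s\n",
                           linkage_get_link_label(lkg, i),
                           linkage_get_word(lkg, linkage_get_link_lword(lkg, i)),
                           linkage_get_word(lkg, linkage_get_link_rword(lkg, i)));
                }
                linkage_delete(lkg);
            }

            sentence_delete(sent);
            parse_options_delete(opts);
            dictionary_delete(dict);
            return 0;
        }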

Theory

An extended overview and summary of Link Grammar can be found on the Link Grammar Wikipedia page, which touches on most of the important, primary aspects of the theory. However, it is no substitute for the original papers published on the topic:

A fairly comprehensive bibliography of papers written before 2004 is here and is mirrored here. A sampling of publications that reference Link Grammar in some way can be found here; some of these may be downloaded here.

Documentation

There is an extensive set of pages documenting the English dictionary; specifically, the names of links and their meanings, as well as how to write new rules. There is also a short primer for creating dictionaries for new languages.

The documentation for the C/C++ programming API is here. Bindings for other programming languages, including Python and Java, can be found in the bindings directory of the GitHub Link Grammar repository.
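As a small illustration of the API, the sketch below configures a Parse_Options object before parsing. The setter names are those declared in link-includes.h for the 5.x series, and the particular values shown are arbitrary examples rather than recommended defaults.

        /* Sketch: tune parse options; the values shown are illustrative only. */
        #include <link-grammar/link-includes.h>

        Parse_Options make_options(void)
        {
            Parse_Options opts = parse_options_create();

            parse_options_set_verbosity(opts, 0);         /* keep the library quiet */
            parse_options_set_linkage_limit(opts, 1000);  /* cap the number of linkages kept */
            parse_options_set_max_parse_time(opts, 10);   /* give up after roughly 10 seconds */
            parse_options_set_max_null_count(opts, 2);    /* tolerate up to 2 unlinked words,
                                                             so near-grammatical input still
                                                             yields a partial parse */
            return opts;
        }

Such an options object can then be passed to sentence_split() and sentence_parse() in place of the default-constructed one used in the example above.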

System Summary

  • Actively maintained! New releases typically happen quarterly.
  • Besides English, there is a comprehensive Russian dictionary, thanks to Sergey Protasov. The Persian and Arabic subsystems were provided by Jon Dehdari. A modest (thousand-word) German dictionary is included. There are proof-of-concept dictionaries for Lithuanian, Indonesian, Kazakh, Vietnamese, Hebrew and Turkish.
  • Morphology is enabled by means of a sophisticated tokenization system, able to simultaneously track ambiguous splitting of words into morphemes.
  • Multiple programming language bindings are available, including Ruby, Python, Perl, Lisp, Java, OCaml and AutoIt.
  • Fully multi-threaded; a standard build system; pkg-config integration; a CMake config file, dynamic/shared library support; pre-defined Docker containers; support for Linux and non-Linux platforms, including Windows, MacOSX, FreeBSD.
  • A network (TCP/IP) parse server provides JSON-formatted parse results.
  • Several security audits have been performed, including testing for malformed input. Should be secure and robust in cloud deployments.
  • Source code hosted at GitHub.
  • LGPL v2.1 license; see endnote for details.


Downloading Link Grammar

The source code to the system can be downloaded as a tarball. The current stable version is Link Grammar 5.5.1 (July, 2018). Older versions are available here.

GitHub hosts the primary link-grammar repository; issues (bugs) should be reported there. Developers who are not part of the core development team should not use or deploy the source from GitHub: it is unstable, and is frequently buggy or broken. All other users should use the tarballs only.

Mailing Lists

The mailing list for Link Grammar discussion is at the link-grammar google group.


Ongoing development by OpenCog

Ongoing development of Link Grammar is guided and supported by the Open Cognition project, where the parser plays an important role in the OpenCog natural language processing subsystem. Research and implementation are ongoing; current work includes investigations into unsupervised learning of language, unsupervised learning of morphology, semantically guided parsing, and grammatically induced word-sense disambiguation.

Stanford Parser Compatibility

A sibling project, RelEx, uses constraint-grammar-like techniques to extract dependency relations that are compatible with the Stanford parser. Its performance is comparable to the Stanford PCFG parsing model, and it is more than three times faster than the Stanford "lexicalized" (factored) model.

The RelEx project is no longer in active development. We learned (the hard way) that the native Link Grammar parses contain much more information than the Stanford dependency markup is capable of supporting. The Stanford-style dependencies are simply not rich or sophisticated enough to produce the kind of data needed for semantic analysis and comprehension, viz. tasks such as predicate-argument extraction, framing, semantic selection, and the like.

Language generation

For sentence generation, i.e. the creation of grammatically correct sentences from a bag of semantic relations, the microplanner and surface realization (sureal) portion of OpenCog is strongly recommended. A short example is here. These "sort-of work", but not very well. The primary issue is that they do not make use of the statistical information available in language to choose likely or reasonable sentence constructions.

We previously recommended two projects that should now be considered obsolete: NLGen and NLGen2. For your entertainment, they're still listed below. The NLGen and NLGen2 projects provide natural language generation modules, based on, and compatible with, link-grammar and RelEx. They implement the SegSim ideas for NL generation. See the following YouTube videos of a virtual dog, showing some of NLGen's capabilities (circa 2009): Demo of Virtual Dog Learning to Play Fetch via Imitation and Reinforcement, AI Virtual Dog's Emotions Fluctuate Based on Its Experiences, Demo of Embodied Anaphora Resolution and AI Virtual Dog Answers Simple Questions about Itself and Its Environment.


Linguistic Disclaimer

Link Grammar is a natural language parser, not a human-level artificial general intelligence. This means that there are many sentences that it cannot parse correctly, or at all. There are entire classes of speech and writing that it cannot handle, including twitter posts, IRC chat logs, Valley-girl basilect, Old and Middle English, stock-market listings and raw HTML dumps.

Link Grammar works best with "newspaper English", as taught to and written by those educated in American colleges: standard-sized sentences, with proper grammar, proper punctuation, and correct capitalization. Link Grammar has difficulties with the following types of textual input:

  • Phrases (that are not a part of a complete sentence).
  • Twitter posts. These tend to be sentence fragments, often lacking proper grammatical structure.
  • Any text containing a large number of spelling errors.
  • "Registers", such as newspaper headlines, where determiners are omitted; for example, "Thieves rob bank."
  • Dialog, stage plays and movie scripts. Such dialog tends to consist of interleaved sentences.
  • Speech-to-text output. Such systems generate large numbers of mis-heard words that, taken at face value, cannot be part of valid sentences. Even if such recognition were perfect, spoken English tends not to be as well-constructed or grammatical as written English.
  • Support for British English and Commonwealth English is poor. This includes any English dialects spoken in India, Pakistan, Nigeria, Bangladesh, South Africa, as well as former American protectorates, such as the Philippines. British and regional spellings of words are missing from the dictionaries.
  • Slang and various regional non-middle-class-American dialects. This includes most dialects spoken by anyone living in economically poor or under-educated geographical regions, whether in urban housing projects or the red-state small-town and rural poor. Self-identifying subgroup dialects are also not handled, such as drug-culture, gang-culture and hacker-culture.
  • Long run-on sentences. These can generate thousands of alternative parses in a combinatorial explosion.

It is hoped that the unsupervised learning of language proposal will be of sufficient power and ability to handle most of these exceptional cases. Work is currently ongoing.


Natural Language Support

Ranked in order of maturity.

English
The main English documentation is here.
Russian
A set of Russian dictionaries providing full coverage of the language has been incorporated into the main distribution as of version 4.7.10 (March 2013). An older version, from which these are derived, can be found at http://slashzone.ru/parser/. By Sergey Protasov. Includes link documentation (mirror) and subscript (morphology) documentation (mirror). Russian morpheme dictionaries can be had at http://aot.ru.

Documentation of the links and of the word classes is available as a list of examples.

Persian
The Persian dictionaries from Jon Dehdari have been incorporated into the main distribution, as of version 5.0.0 (April 2014). This includes a copy of the Persian stemming engine, as significant morphology analysis needs to be performed to parse Persian.
Arabic
The Arabic dictionaries from Jon Dehdari have been incorporated into the main distribution, as of version 5.0.0 (April 2014). These are derived from the older, original version. [Mirror] These require the Aramorph stemming package, which is included.
German
A small German dictionary, consisting of 850 words, is included. A brief description is provided here.
Lithuanian
A small Lithuanian prototype dictionary has been created. It contains a few hundred words. A few basic sentences parse just fine; the current version focuses on morphological analysis coupled with grammatical analysis. Documentation is here.

A very rough Lithuanian dictionary has been created; almost nothing works yet. Documentation is here.

Vietnamese
A small Vietnamese prototype dictionary has been created. It contains several hundred words.
Indonesian
A small Indonesian prototype dictionary has been created. It contains about one hundred words.
Hebrew
A very small Hebrew prototype dictionary has been created. It contains a few dozen words. Almost nothing works correctly (yet).
Kazakh
A very small Kazakh prototype dictionary has been created. It contains a few dozen words. Almost nothing works correctly (yet).
Turkish
A very small Turkish prototype dictionary has been created. It contains a few dozen words. Almost nothing works correctly (yet).
French, Luthor project
The Luthor project aims to develop a set of scripts to automatically construct Link Grammar linkage dictionaries by mining Wiktionary data. Current efforts are focusing on French. (This project appears to be defunct).

Adjunct Projects

The default distribution for Link Grammar includes bindings for Java, Python, OCaml, Common Lisp, and AutoIt, as well as a SWIG FFI interface file. Additional language bindings, and some related projects, are listed below:

RelEx Semantic Relation Extractor
RelEx is an English-language semantic relationship extractor, built on the Link Parser. It can identify subject, object, indirect object and many other relationships between words in a sentence. It will also provide part-of-speech tagging, noun-number tagging, verb tense tagging, gender tagging, and so on. RelEx includes a basic implementation of the Hobbs anaphora (pronoun) resolution algorithm.
Ruby bindings
Ruby bindings are coordinated at the Ruby-LinkParser website. The code can be found at the ged/link-parser github page.
Perl bindings
The perl bindings, created by Danny Brian, have been updated. See the Lingua-LinkParser page on CPAN. There is also a tutorial written against an older version of the bindings; some details may be different.
Psi Toolkit (Perl)
The Psi Toolkit, an NLP toolkit aimed at linguists and NLP engineers, includes bindings for link-grammar, via perl.
Javascript
Obsolete Javascript bindings can be found at the dijs/link-grammar github page. Someone, please port these to the latest version!
Pre-parsed Wikipedia
Parsed versions of various texts, including all articles from a May 2008 dump of Wikipedia, as well as a partial parse of an October 2010 dump, are available at http://gnucash.org/linas/nlp/data/

Recent Changes

Version 5.5.1 (27 July 2018)

  • Fix broken Java bindings build.
  • English dict: Fix clause openers with questions.
  • English dict: Various misc fixes.
  • English dict: Various paraphrasing verbs.
  • Bring the SQL-backed dict to production state.
  • Convert MSVC build to MSVC15 (Visual Studio 2017).
  • Restore the repeatability of the produced linkages.

Version 5.5.0 (29 April 2018)

  • Fix accidental API breakage that impacts OpenCog.
  • Fix memory leak when parsing with null links.
  • Python bindings: Add an optional parse-option argument to parse().
  • Add an extended version API and use it in "link-parser --version".
  • Fix spurious errors if the last dict line is a comment.
  • Fix garbage report if EOF encountered in a quoted dict word.
  • Fix garbage report if whitespace encountered in a quoted dict word.
  • Add a per-command help in link-parser.
  • Add a command line completion in link-parser.
  • Enable build of word-graph printing support by default.
  • Add idiom lookup in link-parser's dict lookup command (!!idiom_here).
  • Improve handling of quoted words (e.g. single words in "scare quotes").
  • Fix random selection of linkages so that it's actually random.

Version 5.4.4 (11 March 2018)

  • Dictionary loading now thread safe.
  • Fix post-nominal modifiers used with pronouns.
  • Fix comparative openers.
  • Fix given-name single-letter abbreviations.
  • Fix conjoined questions and conjoined WH-statements.
  • Fix conditional sentences.
  • Fix misc comparatives.
  • Fix crash on invalid UTF-8 input.
  • Fix many predicative adjective uses.
  • Fix many paraphrasing-type constructions.
  • Minor cleanup of word-lists.
  • New dict definition LENGTH-LIMIT-n to limit connector link length to n.
  • Speed up parsing of Russian by factor of 2x.
  • Add assorted technical vocabulary (#680).
  • Fix conjoined infinitives.

Version 5.4.3 (4 January 2018)

  • Fix man page installation (actually broken from 5.3.0).
  • Add "thither" to the English dictionary.
  • Fix printing infinite loop for very narrow screen widths.
  • Some Windows code clean up.
  • Remove trailing blanks from the linkage diagram.
  • Fix square area and cubic volume measurements (English dict).
  • Fix assorted exclamations and responses (English dict).
  • Fix displaying random linkages on Windows.
  • Fix unit tokenization to remove ambiguity.
  • Fix utf8-related bug on Windows that could affect printing.
  • Add missing affix file, needed for the 'any' language.

Version 5.4.2 (19 October 2017)

  • Fix man page build (broken in 5.4.1).

Version 5.4.1 (18 October 2017)

  • Fix man page installation (broken in 5.3.8).
  • Add affix-class MPUNC for splitting at intra-word punctuation.
  • Fix crash when there is no PP info.
  • Fix a stack buffer overflow.
  • Eliminate hard-wired linkage diagram size limitations.
  • Fix an unintended clipping of the linkage-limit option to 250000.

Version 5.4.0 (26 July 2017)

Notable: This reorganizes the source code into subdirectories, grouped according to the processing stage. This should make it easier to understand what the major components are, and which files & functions are a part of each component.

  • Fix for missing locale info in Windows XP.
  • Empty out the post-processing tables for the any, ady, amy languages.
  • Remove left_print_string() from the API.
  • Recover pp_lexer.l from ancient version 2.2!
  • Fix unusual crash in post-processing for the "any" language.
  • Remove three deprecated post-processing functions from API.
  • Major reorganization of code base into more modular directories.
  • Revive the sqlite3 dictionary into operational form.
  • Add double-quotes to splittable punctuation for the "any" language.
  • Add API functions to get linkage word positions in the sentence.
  • Fix printing of diagrams containing Chinese or other wide glyphs.
  • Fix `make distclean` when ant not installed.
A list of older changes can be found here.

Website

Issues concerning this website should be addressed to Linas Vepstas - <linasvepstas@gmail.com> or Dom Lachowicz - <domlachowicz@gmail.com>.

License

Current versions of the Link Grammar parser software, language dictionaries and documentation are available under the LGPL v2.1 license. Versions prior to 5.0.0 are available under a variant of the BSD license.

Copyright (c) 2003-2004 Daniel Sleator, David Temperley, and John Lafferty. All rights reserved.
Copyright (c) 2003 Peter Szolovits
Copyright (c) 2004,2012,2013 Sergey Protasov
Copyright (c) 2006 Sampo Pyysalo
Copyright (c) 2007 Mike Ross
Copyright (c) 2008,2009,2010 Borislav Iordanov
Copyright (c) 2008-2018 Linas Vepstas
Copyright (c) 2014-2018 Amir Plivatsky