Latent semantic analysis

Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text. A matrix containing word counts per paragraph (rows represent unique words and columns represent each paragraph) is constructed from a large piece of text and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Words are then compared by taking the cosine of the angle between the two vectors (or the dot product between the normalizations of the two vectors) formed by any two rows. Values close to 1 represent very similar words while values close to 0 represent very dissimilar words.[1]

An information retrieval technique using latent semantic structure was patented in 1988 (US Patent 4,839,853, now expired) by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called Latent Semantic Indexing (LSI).[2]

Overview

Occurrence matrix

LSA can use a term-document matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the weight of an element of the matrix is proportional to the number of times the term appears in the corresponding document, with rare terms upweighted to reflect their relative importance.

This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.
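
For concreteness, the following is a minimal NumPy sketch of how such a weighted term-document matrix might be assembled from a hypothetical three-document toy corpus (the documents and the exact tf-idf variant are illustrative only):

    import numpy as np

    # Hypothetical toy corpus of three short "documents".
    docs = ["the car is driven on the road",
            "the truck is driven on the highway",
            "a flower grows in the garden"]

    # Vocabulary and raw count matrix: rows are terms, columns are documents.
    vocab = sorted({w for d in docs for w in d.split()})
    counts = np.array([[d.split().count(t) for d in docs] for t in vocab], dtype=float)

    # tf-idf reweighting: term frequency scaled by inverse document frequency,
    # so that terms occurring in few documents are upweighted.
    tf = counts / counts.sum(axis=0, keepdims=True)   # term frequency per document
    df = (counts > 0).sum(axis=1)                     # documents containing each term
    idf = np.log(len(docs) / df)                      # inverse document frequency
    X = tf * idf[:, None]                             # weighted term-document matrix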

Rank lowering

After the construction of the occurrence matrix, LSA finds a low-rank approximation[3] to the term-document matrix. There could be various reasons for these approximations:

  • The original term-document matrix is presumed too large for the computing resources; in this case, the approximated low rank matrix is interpreted as an approximation (a "least and necessary evil").
  • The original term-document matrix is presumed noisy: for example, anecdotal instances of terms are to be eliminated. From this point of view, the approximated matrix is interpreted as a de-noisified matrix (a better matrix than the original).
  • The original term-document matrix is presumed overly sparse relative to the "true" term-document matrix. That is, the original matrix lists only the words actually in each document, whereas we might be interested in all words related to each document—generally a much larger set due to synonymy.

The consequence of the rank lowering is that some dimensions are combined and depend on more than one term:

{(car), (truck), (flower)} ↦ {(1.3452 * car + 0.2828 * truck), (flower)}

This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also mitigates the problem with polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense.
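
As a sketch of this effect, the following NumPy snippet builds an invented count matrix in which "car" and "truck" co-occur, computes its SVD, and keeps the two largest singular values; the retained dimensions mix the co-occurring terms much as in the mapping above:

    import numpy as np

    # Invented counts for three terms (rows) in four documents (columns);
    # "car" and "truck" tend to co-occur, "flower" appears elsewhere.
    X = np.array([[2., 1., 0., 0.],   # car
                  [1., 2., 0., 0.],   # truck
                  [0., 0., 1., 2.]])  # flower

    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    k = 2                                        # keep the two largest singular values
    X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-k approximation of X

    # The first retained dimension weights "car" and "truck" together,
    # while "flower" dominates the second dimension.
    print(np.round(U[:, :k], 3))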

Derivation

Let X be a matrix where element (i,j) describes the occurrence of term i in document j (this can be, for example, the frequency). X will look like this:


\begin{matrix} 
 & \textbf{d}_j \\
 & \downarrow \\
\textbf{t}_i^T \rightarrow &
\begin{bmatrix} 
x_{1,1} & \dots & x_{1,n} \\
\vdots & \ddots & \vdots \\
x_{m,1} & \dots & x_{m,n} \\
\end{bmatrix}
\end{matrix}

Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:

\textbf{t}_i^T = \begin{bmatrix} x_{i,1} & \dots & x_{i,n} \end{bmatrix}

Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:

\textbf{d}_j = \begin{bmatrix} x_{1,j} \\ \vdots \\ x_{m,j} \end{bmatrix}

Now the dot product \textbf{t}_i^T \textbf{t}_p between two term vectors gives the correlation between the terms over the documents. The matrix product X X^T contains all these dot products. Element (i,p) (which is equal to element (p,i)) contains the dot product \textbf{t}_i^T \textbf{t}_p ( = \textbf{t}_p^T \textbf{t}_i). Likewise, the matrix X^T X contains the dot products between all the document vectors, giving their correlation over the terms: \textbf{d}_j^T \textbf{d}_q = \textbf{d}_q^T \textbf{d}_j.
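
This correspondence can be checked numerically; the sketch below uses a small invented matrix and plain NumPy:

    import numpy as np

    # Small invented term-document matrix (terms as rows, documents as columns).
    X = np.array([[2., 1., 0., 0.],
                  [1., 2., 0., 0.],
                  [0., 0., 1., 2.]])

    # Entry (i, p) of X X^T is the dot product of the term vectors t_i and t_p.
    assert np.isclose((X @ X.T)[0, 1], X[0, :] @ X[1, :])

    # Entry (j, q) of X^T X is the dot product of the document vectors d_j and d_q.
    assert np.isclose((X.T @ X)[0, 1], X[:, 0] @ X[:, 1])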

Now, from the theory of linear algebra, there exists a decomposition of X such that U and V are orthogonal matrices and \Sigma is a diagonal matrix. This is called a singular value decomposition (SVD):


\begin{matrix}
X = U \Sigma V^T
\end{matrix}

The matrix products giving us the term and document correlations then become


\begin{matrix}
X X^T &=& (U \Sigma V^T) (U \Sigma V^T)^T = (U \Sigma V^T) (V^{T^T} \Sigma^T U^T) = U \Sigma V^T V \Sigma^T U^T = U \Sigma \Sigma^T U^T \\
X^T X &=& (U \Sigma V^T)^T (U \Sigma V^T) = (V^{T^T} \Sigma^T U^T) (U \Sigma V^T) = V \Sigma^T U^T U \Sigma V^T = V \Sigma^T \Sigma V^T
\end{matrix}

Since \Sigma \Sigma^T and \Sigma^T \Sigma are diagonal, we see that U must contain the eigenvectors of X X^T, while V must contain the eigenvectors of X^T X. Both products have the same non-zero eigenvalues, given by the non-zero entries of \Sigma \Sigma^T, or equally, by the non-zero entries of \Sigma^T\Sigma. Now the decomposition looks like this:


\begin{matrix} 
 & X & & & U & & \Sigma & & V^T \\
 & (\textbf{d}_j) & & & & & & & (\hat{\textbf{d}}_j) \\
 & \downarrow & & & & & & & \downarrow \\
(\textbf{t}_i^T) \rightarrow 
&
\begin{bmatrix} 
x_{1,1} & \dots & x_{1,n} \\
\\
\vdots & \ddots & \vdots \\
\\
x_{m,1} & \dots & x_{m,n} \\
\end{bmatrix}
&
=
&
(\hat{\textbf{t}}_i^T) \rightarrow
&
\begin{bmatrix} 
\begin{bmatrix} \, \\ \, \\ \textbf{u}_1 \\ \, \\ \,\end{bmatrix} 
\dots
\begin{bmatrix} \, \\ \, \\ \textbf{u}_l \\ \, \\ \, \end{bmatrix}
\end{bmatrix}
&
\cdot
&
\begin{bmatrix} 
\sigma_1 & \dots & 0 \\
\vdots & \ddots & \vdots \\
0 & \dots & \sigma_l \\
\end{bmatrix}
&
\cdot
&
\begin{bmatrix} 
\begin{bmatrix} & & \textbf{v}_1 & & \end{bmatrix} \\
\vdots \\
\begin{bmatrix} & & \textbf{v}_l & & \end{bmatrix}
\end{bmatrix}
\end{matrix}

The values \sigma_1, \dots, \sigma_l are called the singular values, and u_1, \dots, u_l and v_1, \dots, v_l the left and right singular vectors. Notice that the only part of U that contributes to \textbf{t}_i is the i\textrm{'th} row. Let this row vector be called \hat{\textbf{t}}_i. Likewise, the only part of V^T that contributes to \textbf{d}_j is the j\textrm{'th} column, \hat{\textbf{d}}_j. These are not eigenvectors themselves; rather, each collects one component from every singular vector.
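
Assuming a small invented matrix, the relationship between the SVD factors and the eigendecompositions of X X^T and X^T X can be verified numerically with NumPy:

    import numpy as np

    X = np.array([[2., 1., 0., 0.],
                  [1., 2., 0., 0.],
                  [0., 0., 1., 2.]])

    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    # The non-zero eigenvalues of X X^T (and of X^T X) are the squared singular values.
    eigvals = np.linalg.eigvalsh(X @ X.T)[::-1]   # eigvalsh returns ascending order
    assert np.allclose(eigvals, s**2)

    # Row i of U is the reduced term representation and column j of V^T is the
    # reduced document representation, written hat(t)_i and hat(d)_j above.
    t_hat_0 = U[0, :]
    d_hat_0 = Vt[:, 0]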

It turns out that when you select the k largest singular values, and their corresponding singular vectors from U and V, you get the rank k approximation to X with the smallest error (in the Frobenius norm). More importantly, we can now treat the term and document vectors as a "semantic space". The vector \hat{\textbf{t}}_i then has k entries, mapping term i into a lower-dimensional space. These new dimensions do not relate to any comprehensible concepts; they are a lower-dimensional approximation of the higher-dimensional space. Likewise, the vector \hat{\textbf{d}}_j is an approximation of document j in this lower-dimensional space. We write this approximation as

X_k = U_k \Sigma_k V_k^T

You can now do the following:

  • Compare documents j and q in the low-dimensional space by comparing the vectors \Sigma_k \hat{\textbf{d}}_j and \Sigma_k \hat{\textbf{d}}_q (typically by cosine similarity).
  • Compare terms i and p by comparing the vectors \Sigma_k \hat{\textbf{t}}_i and \Sigma_k \hat{\textbf{t}}_p.
  • Cluster the document or term vector representations using traditional clustering algorithms such as k-means, with a similarity measure like cosine.
  • Given a query, view it as a mini document and compare it to the documents in the low-dimensional space.

To do the latter, you must first translate the query into the low-dimensional space, using the same transformation that is applied to the documents:

\hat{\textbf{d}}_j = \Sigma_k^{-1} U_k^T \textbf{d}_j

Note here that the inverse of the diagonal matrix \Sigma_k may be found by inverting each nonzero value within the matrix.

This means that if you have a query vector q, you must do the translation \hat{\textbf{q}} = \Sigma_k^{-1} U_k^T \textbf{q} before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:

\textbf{t}_i^T = \hat{\textbf{t}}_i^T \Sigma_k V_k^T
\hat{\textbf{t}}_i^T = \textbf{t}_i^T V_k \Sigma_k^{-1}
\hat{\textbf{t}}_i = \Sigma_k^{-1} V_k^T \textbf{t}_i
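
Putting these pieces together, the sketch below truncates the SVD of an invented toy matrix, folds a hypothetical query into the k-dimensional space as above, and ranks the documents by cosine similarity (vocabulary and counts are made up for illustration):

    import numpy as np

    # Invented term-document matrix; rows correspond to the terms
    # ["car", "truck", "flower"], columns to four documents.
    X = np.array([[2., 1., 0., 0.],
                  [1., 2., 0., 0.],
                  [0., 0., 1., 2.]])

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2
    U_k, Vt_k = U[:, :k], Vt[:k, :]
    S_k = np.diag(s[:k])
    S_k_inv = np.diag(1.0 / s[:k])        # invert each nonzero diagonal entry

    # Document coordinates in the k-dimensional space: columns of Sigma_k V_k^T.
    doc_vecs = S_k @ Vt_k                 # shape (k, number of documents)

    # Treat a query as a mini document over the same vocabulary;
    # here a raw count vector for the single term "truck".
    q = np.array([0., 1., 0.])
    q_hat = S_k_inv @ U_k.T @ q           # fold the query into the low-rank space
    q_vec = S_k @ q_hat                   # scale like the document vectors

    # Rank documents by cosine similarity to the query.
    cos = (doc_vecs.T @ q_vec) / (np.linalg.norm(doc_vecs, axis=0) * np.linalg.norm(q_vec))
    print(np.argsort(-cos))               # documents 0 and 1 should rank highest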

Applications

The new low-dimensional space typically can be used to:

  • Compare the documents in the low-dimensional space (data clustering, document classification).
  • Find similar documents across languages, after analyzing a base set of translated documents (cross language retrieval).
  • Find relations between terms (synonymy and polysemy).
  • Given a query of terms, translate it into the low-dimensional space, and find matching documents (information retrieval).
  • Find the best similarity between small groups of terms, in a semantic way (i.e. in the context of a knowledge corpus), as for example in multiple choice question (MCQ) answering models.[4]

Synonymy and polysemy are fundamental problems in natural language processing:

  • Synonymy is the phenomenon where different words describe the same idea. Thus, a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query. For example, a search for "doctors" may not return a document containing the word "physicians", even though the words have the same meaning.
  • Polysemy is the phenomenon where the same word has multiple meanings. So a search may retrieve irrelevant documents containing the desired words in the wrong meaning. For example, a botanist and a computer scientist looking for the word "tree" probably desire different sets of documents.

Commercial applications

LSA has been used to assist in performing prior art searches for patents.[5]

Applications in human memory

Latent semantic analysis has been used extensively in the study of human memory, especially in areas of free recall and memory search. Studies using lists of random common nouns have found a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words would be recalled one after another in free recall tasks. They also noted that, in these situations, the inter-response time between similar words was much shorter than between dissimilar words. These findings are referred to as the semantic proximity effect.[6]

When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall.[7]

Another model, termed Word Association Spaces (WAS), is also used in memory studies. It is built from free-association data collected in a series of experiments and includes measures of word relatedness for over 72,000 distinct word pairs.[8]

Implementation

The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory.[9] A fast, incremental, low-memory, large-matrix SVD algorithm has also been developed.[10] MATLAB and Python implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution. In recent years progress has been made to reduce the computational complexity of the SVD; for instance, using a parallel ARPACK algorithm to perform the eigenvalue decomposition in parallel can speed up the SVD computation while providing comparable prediction quality.[11]
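
As a rough illustration of the truncated, sparse approach (not of any specific algorithm cited above), SciPy's iterative solver scipy.sparse.linalg.svds can compute only the leading singular triplets of a large sparse matrix; the matrix below is random stand-in data:

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import svds

    # Random sparse stand-in for a large term-document matrix
    # (5,000 terms by 1,000 documents, roughly 0.1% non-zero entries).
    X = sparse_random(5_000, 1_000, density=0.001, format="csr", random_state=0)

    # Compute only the k largest singular triplets with an iterative solver,
    # without ever forming a dense matrix in memory.
    k = 50
    U_k, s_k, Vt_k = svds(X, k=k)          # singular values come back in ascending order
    order = np.argsort(-s_k)               # reorder so the largest come first
    U_k, s_k, Vt_k = U_k[:, order], s_k[order], Vt_k[order, :]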

Limitations

Some of LSA's drawbacks include:

  • The resulting dimensions might be difficult to interpret. For instance, in
{(car), (truck), (flower)} ↦ {(1.3452 * car + 0.2828 * truck), (flower)}
the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to
{(car), (bottle), (flower)} ↦ {(1.3452 * car + 0.2828 * bottle), (flower)}
will occur. This leads to results which can be justified on the mathematical level, but have no interpretable meaning in natural language.
  • LSA cannot capture polysemy (i.e., multiple meanings of a word)[citation needed]. Each occurrence of a word is treated as having the same meaning, because the word is represented as a single point in space. For example, the occurrence of "chair" in a document containing "The Chair of the Board" and in a separate document containing "the chair maker" is treated identically. This behavior results in the vector representation being an average of all the word's different meanings in the corpus, which can make comparisons difficult. However, the effect is often lessened because words tend to have a predominant sense throughout a corpus (i.e. not all meanings are equally likely).

See also

References

  3. Markovsky I. (2012). Low-Rank Approximation: Algorithms, Implementation, Applications. Springer. ISBN 978-1-4471-2226-5.[page needed]
  11. doi:10.1109/ICCSNT.2011.6182070
  12. J Transl Med. 2014 Nov 27;12(1):324.

External links

Implementations

Due to its cross-domain applicability in information retrieval, natural language processing (NLP), cognitive science, and computational linguistics, LSA has been implemented in a variety of software packages.

  • Sense Clusters, an Information Retrieval-oriented Perl implementation of LSA
  • S-Space Package, a Computational Linguistics and Cognitive Science-oriented Java implementation of LSA
  • Semantic Vectors applies Random Projection, LSA, and Reflective Random Indexing to Lucene term-document matrices
  • Infomap Project, an NLP-oriented C implementation of LSA (superseded by semanticvectors project)
  • Text to Matrix Generator, a MATLAB toolbox for generating term-document matrices from text collections, with support for LSA
  • Gensim contains a Python implementation of LSA for matrices larger than RAM (a minimal usage sketch follows below).
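
As an illustration of the last entry, a minimal Gensim sketch is shown below; the toy corpus is invented, and the API calls used (Dictionary, doc2bow, TfidfModel, LsiModel) reflect recent Gensim releases:

    from gensim import corpora, models

    # Invented, pre-tokenised toy corpus.
    texts = [["car", "driven", "road"],
             ["truck", "driven", "highway"],
             ["flower", "garden"]]

    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(text) for text in texts]

    # Optional tf-idf weighting before the LSA step.
    tfidf = models.TfidfModel(corpus)
    lsi = models.LsiModel(tfidf[corpus], id2word=dictionary, num_topics=2)

    # Fold a query into the latent space and inspect its coordinates.
    query = dictionary.doc2bow(["car", "highway"])
    print(lsi[tfidf[query]])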