The problem of topic modeling can be seen as a generalization of the
clustering problem, in that it posits that observations are generated by
multiple latent factors (e.g., the words in each document are generated as a
mixture of several active topics, as opposed to just one). This increased
representational power comes at the cost of a more challenging unsupervised
learning problem: estimating the topic probability vectors (the distributions
over words for each topic) when only the words are observed and the
corresponding topics are hidden.
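To make the generative picture concrete, the following is a minimal sketch of the standard LDA generative process; the vocabulary size, topic count, and Dirichlet parameter below are illustrative choices, not values taken from this work.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_words = 50, 3, 8   # vocabulary size, topics, document length (illustrative)

# Each topic is a probability distribution over the d words.
topics = rng.dirichlet(np.ones(d), size=k)
# Dirichlet prior over each document's topic proportions.
alpha = np.full(k, 0.5)

# Generate one document: draw a topic mixture, then for each word draw an
# active topic from that mixture and a word from that topic's distribution.
theta = rng.dirichlet(alpha)                  # per-document topic proportions
z = rng.choice(k, size=n_words, p=theta)      # hidden topic of each word
words = [rng.choice(d, p=topics[zi]) for zi in z]
```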
We provide a simple and efficient learning procedure that is guaranteed to
recover the parameters of a wide class of mixture models, including the
popular latent Dirichlet allocation (LDA) model. For LDA, the procedure
correctly recovers both the topic probability vectors and the prior over the
topics, using only trigram statistics (i.e., third-order moments, which may be
estimated with documents containing just three words). The method, termed
Excess Correlation Analysis (ECA), is based on a spectral decomposition of
low-order moments (third and fourth order) via two singular value
decompositions (SVDs). Moreover, the algorithm is scalable, since the SVD
operations are carried out on $k\times k$ matrices, where $k$ is the number of
latent factors (e.g., the number of topics), rather than in the
$d$-dimensional observed space (typically $d \gg k$).
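The abstract does not spell out the two decompositions, but their flavor can be sketched in the simplest special case: a single-topic mixture (each document generated by exactly one topic) with exact population moments. The sketch below is ours and is not the paper's ECA for LDA, which additionally corrects the moments using the Dirichlet prior; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 50, 3                                   # observed dimension, latent factors

# Ground truth: topic vectors mu_i (columns) and mixing weights w_i.
mu = rng.dirichlet(np.ones(d), size=k).T       # d x k
w = rng.dirichlet(np.ones(k))                  # k mixing weights

# Population moments of the single-topic model:
#   M2 = sum_i w_i mu_i mu_i^T
#   M3(eta) = sum_i w_i (eta . mu_i) mu_i mu_i^T
M2 = (mu * w) @ mu.T

# Decomposition 1: build a whitening map W with W^T M2 W = I_k.
U, s, _ = np.linalg.svd(M2)
W = U[:, :k] / np.sqrt(s[:k])                  # d x k

# Contract the third-order moment with a random direction eta and whiten;
# the result is a k x k symmetric matrix whose eigenvectors are the
# whitened topic directions (a generic eta separates the eigenvalues).
eta = rng.standard_normal(d)
M3_eta = (mu * (w * (eta @ mu))) @ mu.T
T = W.T @ M3_eta @ W                           # k x k

# Decomposition 2: eigendecomposition of the small k x k matrix.
_, V = np.linalg.eigh((T + T.T) / 2)

# Un-whiten and renormalize: columns of mu_hat match mu up to permutation.
mu_hat = M2 @ W @ V
mu_hat /= mu_hat.sum(axis=0, keepdims=True)
```

For clarity, the first decomposition above is taken directly on the $d\times d$ second moment; the scalability claim in the abstract refers to reducing the SVD work to $k\times k$ matrices, which this toy sketch does not attempt.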