Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation
This is an example of applying sklearn.decomposition.NMF and
sklearn.decomposition.LatentDirichletAllocation on a corpus of documents
and extracting additive models of the topic structure of the corpus. The
output is a list of topics, each represented as a list of terms (weights
are not shown).
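Concretely, each row of a fitted model's components_ matrix holds one topic's
weights over the vocabulary, and the per-topic term list is simply the
highest-weighted entries of that row. A minimal sketch (the helper name
top_terms is illustrative, not part of the example script below):

import numpy as np

def top_terms(components, feature_names, n_top=10):
    # Each row of `components` is one topic's weight vector over the
    # vocabulary; sort it in descending order and keep the n_top terms.
    return [[feature_names[i] for i in np.argsort(row)[::-1][:n_top]]
            for row in components]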
Non-negative Matrix Factorization is applied with two different objective
functions: the Frobenius norm and the generalized Kullback-Leibler
divergence. The latter is equivalent to Probabilistic Latent Semantic
Indexing.
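The objective is selected through NMF's beta_loss parameter; the generalized
Kullback-Leibler divergence requires the multiplicative-update ('mu') solver,
since the default coordinate-descent solver only supports the Frobenius norm.
A minimal sketch, assuming the NMF API of the scikit-learn version this
example targets:

from sklearn.decomposition import NMF

# Frobenius norm: the default objective, coordinate-descent solver.
nmf_frobenius = NMF(n_components=10, beta_loss='frobenius', solver='cd',
                    random_state=1)

# Generalized Kullback-Leibler divergence: needs the 'mu' solver and
# typically more iterations to converge.
nmf_kl = NMF(n_components=10, beta_loss='kullback-leibler', solver='mu',
             max_iter=1000, random_state=1)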
The default parameters (n_samples / n_features / n_components) should make
the example run in a few tens of seconds. You can try to increase the
dimensions of the problem, but be aware that fitting NMF has polynomial time
complexity in the problem size, while LDA's cost grows proportionally to
(n_samples * iterations).
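As a rough empirical check before committing to larger settings, a
hypothetical timing loop over increasing n_samples, reusing the same
preprocessing as the script below:

from time import time
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

data, _ = fetch_20newsgroups(shuffle=True, random_state=1,
                             remove=('headers', 'footers', 'quotes'),
                             return_X_y=True)
vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, max_features=1000,
                             stop_words='english')
for n in (1000, 2000, 4000):
    X = vectorizer.fit_transform(data[:n])
    t0 = time()
    NMF(n_components=10, random_state=1).fit(X)
    print("n_samples=%d: done in %0.3fs" % (n, time() - t0))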
Loading dataset...
done in 1.111s.
Extracting tf-idf features for NMF...
done in 0.307s.
Extracting tf features for LDA...
done in 0.280s.

Fitting the NMF model (Frobenius norm) with tf-idf features, n_samples=2000 and n_features=1000...
done in 0.233s.

Topics in NMF model (Frobenius norm):
Topic #0: just people don think like know time good make way really say right ve want did ll new use years
Topic #1: windows use dos using window program os drivers application help software pc running ms screen files version card code work
Topic #2: god jesus bible faith christian christ christians does heaven sin believe lord life church mary atheism belief human love religion
Topic #3: thanks know does mail advance hi info interested email anybody looking card help like appreciated information send list video need
Topic #4: car cars tires miles 00 new engine insurance price condition oil power speed good 000 brake year models used bought
Topic #5: edu soon com send university internet mit ftp mail cc pub article information hope program mac email home contact blood
Topic #6: file problem files format win sound ftp pub read save site help image available create copy running memory self version
Topic #7: game team games year win play season players nhl runs goal hockey toronto division flyers player defense leafs bad teams
Topic #8: drive drives hard disk floppy software card mac computer power scsi controller apple mb 00 pc rom sale problem internal
Topic #9: key chip clipper keys encryption government public use secure enforcement phone nsa communications law encrypted security clinton used legal standard

Fitting the NMF model (generalized Kullback-Leibler divergence) with tf-idf features, n_samples=2000 and n_features=1000...
done in 0.814s.

Topics in NMF model (generalized Kullback-Leibler divergence):
Topic #0: people don just like think did say time make know really right said things way ve course didn question probably
Topic #1: windows help thanks using hi looking info video dos pc does anybody ftp appreciated mail know advance available use card
Topic #2: god does jesus true book christian bible christians religion faith believe life church christ says know read exist lord people
Topic #3: thanks know bike interested mail like new car edu heard just price list email hear want cars thing sounds reply
Topic #4: 10 00 sale time power 12 new 15 year 30 offer condition 14 16 model 11 monitor 100 old 25
Topic #5: space government number public data states earth security water research nasa general 1993 phone information science technology provide blood internet
Topic #6: edu file com program soon try window problem remember files sun send library article mike wrong think code win manager
Topic #7: game team year games play win season points world division won players nhl flyers toronto case cubs teams ll record
Topic #8: drive think hard software disk drives apple computer mac need scsi card don problem read floppy post cable going ii
Topic #9: use good just key chip got like ll way clipper doesn keys don better speed stuff want sure going need

Fitting LDA models with tf features, n_samples=2000 and n_features=1000...
done in 3.504s.
Topics in LDA model:
Topic #0: edu com mail send graphics ftp pub available contact university list faq ca information cs 1993 program sun uk mit
Topic #1: don like just know think ve way use right good going make sure ll point got need really time doesn
Topic #2: christian think atheism faith pittsburgh new bible radio games alt lot just religion like book read play time subject believe
Topic #3: drive disk windows thanks use card drives hard version pc software file using scsi help does new dos controller 16
Topic #4: hiv health aids disease april medical care research 1993 light information study national service test led 10 page new drug
Topic #5: god people does just good don jesus say israel way life know true fact time law want believe make think
Topic #6: 55 10 11 18 15 team game 19 period play 23 12 13 flyers 20 25 22 17 24 16
Topic #7: car year just cars new engine like bike good oil insurance better tires 000 thing speed model brake driving performance
Topic #8: people said did just didn know time like went think children came come don took years say dead told started
Topic #9: key space law government public use encryption earth section security moon probe enforcement keys states lunar military crime surface technology
# Author: Olivier Grisel <email@example.com>
#         Lars Buitinck
#         Chyi-Kwei Yau <firstname.lastname@example.org>
# License: BSD 3 clause

from time import time

from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF, LatentDirichletAllocation
from sklearn.datasets import fetch_20newsgroups

n_samples = 2000
n_features = 1000
n_components = 10
n_top_words = 20


def print_top_words(model, feature_names, n_top_words):
    for topic_idx, topic in enumerate(model.components_):
        message = "Topic #%d: " % topic_idx
        message += " ".join([feature_names[i]
                             for i in topic.argsort()[:-n_top_words - 1:-1]])
        print(message)
    print()


# Load the 20 newsgroups dataset and vectorize it. We use a few heuristics
# to filter out useless terms early on: the posts are stripped of headers,
# footers and quoted replies, and common English words, words occurring in
# only one document or in at least 95% of the documents are removed.

print("Loading dataset...")
t0 = time()
data, _ = fetch_20newsgroups(shuffle=True, random_state=1,
                             remove=('headers', 'footers', 'quotes'),
                             return_X_y=True)
data_samples = data[:n_samples]
print("done in %0.3fs." % (time() - t0))

# Use tf-idf features for NMF.
print("Extracting tf-idf features for NMF...")
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,
                                   max_features=n_features,
                                   stop_words='english')
t0 = time()
tfidf = tfidf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))

# Use tf (raw term count) features for LDA.
print("Extracting tf features for LDA...")
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
                                max_features=n_features,
                                stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
print()

# Fit the NMF model with the Frobenius norm
print("Fitting the NMF model (Frobenius norm) with tf-idf features, "
      "n_samples=%d and n_features=%d..."
      % (n_samples, n_features))
t0 = time()
nmf = NMF(n_components=n_components, random_state=1,
          alpha=.1, l1_ratio=.5).fit(tfidf)
print("done in %0.3fs." % (time() - t0))

print("\nTopics in NMF model (Frobenius norm):")
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
print_top_words(nmf, tfidf_feature_names, n_top_words)

# Fit the NMF model with the generalized Kullback-Leibler divergence
print("Fitting the NMF model (generalized Kullback-Leibler divergence) with "
      "tf-idf features, n_samples=%d and n_features=%d..."
      % (n_samples, n_features))
t0 = time()
nmf = NMF(n_components=n_components, random_state=1,
          beta_loss='kullback-leibler', solver='mu', max_iter=1000,
          alpha=.1, l1_ratio=.5).fit(tfidf)
print("done in %0.3fs." % (time() - t0))

print("\nTopics in NMF model (generalized Kullback-Leibler divergence):")
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
print_top_words(nmf, tfidf_feature_names, n_top_words)

# Fit the LDA model
print("Fitting LDA models with tf features, "
      "n_samples=%d and n_features=%d..."
      % (n_samples, n_features))
lda = LatentDirichletAllocation(n_components=n_components, max_iter=5,
                                learning_method='online',
                                learning_offset=50.,
                                random_state=0)
t0 = time()
lda.fit(tf)
print("done in %0.3fs." % (time() - t0))

print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, n_top_words)
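Note that this script targets the scikit-learn API current at the time it was
written. If you run it against a newer release (1.0 or later), two calls have
since changed; a hedged sketch of the equivalents, reusing tfidf and
tfidf_vectorizer from the script above (verify against your installed
version):

from sklearn.decomposition import NMF

# get_feature_names() was renamed get_feature_names_out() in scikit-learn 1.0.
tfidf_feature_names = tfidf_vectorizer.get_feature_names_out()

# NMF's single `alpha` regularization parameter was split into
# alpha_W / alpha_H (assumed equivalent settings here).
nmf = NMF(n_components=10, random_state=1,
          alpha_W=0.1, alpha_H=0.1, l1_ratio=0.5).fit(tfidf)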
Total running time of the script: (0 minutes 6.446 seconds)
Estimated memory usage: 50 MB