sklearn.decomposition.LatentDirichletAllocation

class sklearn.decomposition.LatentDirichletAllocation(n_topics=10, doc_topic_prior=None, topic_word_prior=None, learning_method='online', learning_decay=0.7, learning_offset=10.0, max_iter=10, batch_size=128, evaluate_every=-1, total_samples=1000000.0, perp_tol=0.1, mean_change_tol=0.001, max_doc_update_iter=100, n_jobs=1, verbose=0, random_state=None)

Latent Dirichlet Allocation with online variational Bayes algorithm.
New in version 0.17.
Parameters: n_topics : int, optional (default=10)
Number of topics.
doc_topic_prior : float, optional (default=None)
Prior of document topic distribution theta. If the value is None, defaults to 1 / n_topics. In the literature, this is called alpha.
topic_word_prior : float, optional (default=None)
Prior of topic word distribution beta. If the value is None, defaults to 1 / n_topics. In the literature, this is called eta.
learning_method : 'batch' | 'online', default='online'
Method used to update ``components_``. Only used in the fit method. In general, if the data size is large, the online update will be much faster than the batch update. Valid options:
'batch': Batch variational Bayes method. Use all training data in each EM update. Old ``components_`` will be overwritten in each iteration.
'online': Online variational Bayes method. In each EM update, use a mini-batch of training data to update the ``components_`` variable incrementally. The learning rate is controlled by the ``learning_decay`` and ``learning_offset`` parameters.
learning_decay : float, optional (default=0.7)
A parameter that controls the learning rate in the online learning method. The value should be set between (0.5, 1.0] to guarantee asymptotic convergence. When the value is 0.0 and ``batch_size`` is ``n_samples``, the update method is the same as batch learning. In the literature, this is called kappa. (A sketch of the resulting learning-rate schedule follows this parameter list.)
learning_offset : float, optional (default=10.)
A (positive) parameter that downweights early iterations in online learning. It should be greater than 1.0. In the literature, this is called tau_0.
max_iter : integer, optional (default=10)
The maximum number of iterations.
total_samples : int, optional (default=1e6)
Total number of documents. Only used in the partial_fit method.
batch_size : int, optional (default=128)
Number of documents to use in each EM iteration. Only used in online learning.
evaluate_every : int, optional (default=-1)
How often to evaluate perplexity. Only used in the fit method. Set it to 0 or a negative number to not evaluate perplexity in training at all. Evaluating perplexity can help you check convergence in the training process, but it will also increase total training time. Evaluating perplexity in every iteration might increase training time up to two-fold.
perp_tol : float, optional (default=1e-1)
Perplexity tolerance in batch learning. Only used when ``evaluate_every`` is greater than 0.
mean_change_tol : float, optional (default=1e-3)
Stopping tolerance for updating document topic distribution in E-step.
max_doc_update_iter : int (default=100)
Maximum number of iterations for updating document topic distribution in the E-step.
n_jobs : int, optional (default=1)
The number of jobs to use in the E-step. If -1, all CPUs are used. For ``n_jobs`` below -1, (n_cpus + 1 + n_jobs) are used.
verbose : int, optional (default=0)
Verbosity level.
random_state : int or RandomState instance or None, optional (default=None)
Pseudo-random number generator seed control.
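The learning rate implied by ``learning_decay`` and ``learning_offset`` follows the schedule rho_t = (learning_offset + t) ** (-learning_decay) from reference [1] below, where t counts mini-batch EM updates. A minimal sketch (the helper name is illustrative, not part of the API):

def learning_rate(t, learning_offset=10.0, learning_decay=0.7):
    # Weight applied to the t-th mini-batch EM update. A larger
    # learning_offset downweights early updates; a larger learning_decay
    # makes the rate shrink faster.
    return (learning_offset + t) ** (-learning_decay)

print(learning_rate(0))    # ~0.200
print(learning_rate(100))  # ~0.037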
Attributes: components_ : array, [n_topics, n_features]
Topic word distribution.
``components_[i, j]`` represents word j in topic i. In the literature, this is called lambda.
n_batch_iter_ : int
Number of iterations of the EM step.
n_iter_ : int
Number of passes over the dataset.
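Examples
A minimal usage sketch; the toy count matrix below stands in for the output of a vectorizer such as CountVectorizer, and the variable names are illustrative:

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.RandomState(0).randint(0, 5, size=(20, 50))  # 20 docs, 50 terms
lda = LatentDirichletAllocation(n_topics=5, learning_method='online',
                                random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic weights, shape (20, 5)
print(lda.components_.shape)       # (5, 50): topic-word distribution lambda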
References
[1] "Online Learning for Latent Dirichlet Allocation", Matthew D. Hoffman, David M. Blei, Francis Bach, 2010
[2] "Stochastic Variational Inference", Matthew D. Hoffman, David M. Blei, Chong Wang, John Paisley, 2013
[3] Matthew D. Hoffman's onlineldavb code. Link: http://www.cs.princeton.edu/~mdhoffma/code/onlineldavb.tar
Methods
fit(X[, y])	Learn model for the data X with variational Bayes method.
fit_transform(X[, y])	Fit to data, then transform it.
get_params([deep])	Get parameters for this estimator.
partial_fit(X[, y])	Online VB with Mini-Batch update.
perplexity(X[, doc_topic_distr, sub_sampling])	Calculate approximate perplexity for data X.
score(X[, y])	Calculate approximate log-likelihood as score.
set_params(**params)	Set the parameters of this estimator.
transform(X)	Transform data X according to the fitted model.
__init__(n_topics=10, doc_topic_prior=None, topic_word_prior=None, learning_method='online', learning_decay=0.7, learning_offset=10.0, max_iter=10, batch_size=128, evaluate_every=-1, total_samples=1000000.0, perp_tol=0.1, mean_change_tol=0.001, max_doc_update_iter=100, n_jobs=1, verbose=0, random_state=None)
fit(X, y=None)
Learn model for the data X with variational Bayes method.
When ``learning_method`` is 'online', mini-batch updates are used; otherwise, batch updates are used.
Parameters: X : array-like or sparse matrix, shape=(n_samples, n_features)
Document word matrix.
Returns: self
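A hedged sketch of batch fitting on toy data (names illustrative):

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.RandomState(0).randint(0, 5, size=(20, 50))
lda = LatentDirichletAllocation(n_topics=5, learning_method='batch',
                                max_iter=20, random_state=0)
lda.fit(X)  # every EM update sees all 20 documents; returns self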
fit_transform(X, y=None, **fit_params)
Fit to data, then transform it.
Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters: X : numpy array of shape [n_samples, n_features]
Training set.
y : numpy array of shape [n_samples]
Target values.
Returns: X_new : numpy array of shape [n_samples, n_features_new]
Transformed array.
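A short sketch on toy data; fit_transform is a convenience for fitting and transforming the same matrix (names illustrative):

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.RandomState(0).randint(0, 5, size=(20, 50))
lda = LatentDirichletAllocation(n_topics=5, random_state=0)
X_new = lda.fit_transform(X)  # doc-topic matrix, shape (20, 5)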
get_params(deep=True)
Get parameters for this estimator.
Parameters: deep : boolean, optional
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params : mapping of string to any
Parameter names mapped to their values.
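A tiny sketch; get_params returns the constructor parameters, which utilities such as grid search rely on:

from sklearn.decomposition import LatentDirichletAllocation

lda = LatentDirichletAllocation(n_topics=5)
print(lda.get_params()['n_topics'])  # 5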
partial_fit(X, y=None)
Online VB with Mini-Batch update.
Parameters: X : array-like or sparse matrix, shape=(n_samples, n_features)
Document word matrix.
Returns: self
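A streaming sketch (toy data; in practice mini-batches would come off disk or from a vectorizer). Note that ``total_samples`` should reflect the full corpus size so the online updates are scaled correctly:

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.RandomState(0).randint(0, 5, size=(100, 50))
lda = LatentDirichletAllocation(n_topics=5, total_samples=100, random_state=0)
for start in range(0, 100, 10):           # ten mini-batches of 10 documents
    lda.partial_fit(X[start:start + 10])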
perplexity(X, doc_topic_distr=None, sub_sampling=False)
Calculate approximate perplexity for data X.
Perplexity is defined as exp(-1. * log-likelihood per word)
Parameters: X : array-like or sparse matrix, [n_samples, n_features]
Document word matrix.
doc_topic_distr : None or array, shape=(n_samples, n_topics)
Document topic distribution. If it is None, it will be generated by applying transform on X.
Returns: score : float
Perplexity score.
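A sketch of the usual pattern: evaluate perplexity on held-out documents, where lower is better (toy data, illustrative names):

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.RandomState(0)
X_train = rng.randint(0, 5, size=(20, 50))
X_test = rng.randint(0, 5, size=(5, 50))
lda = LatentDirichletAllocation(n_topics=5, random_state=0).fit(X_train)
print(lda.perplexity(X_test))  # lower perplexity = better held-out fit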
score(X, y=None)
Calculate approximate log-likelihood as score.
Parameters: X : array-like or sparse matrix, shape=(n_samples, n_features)
Document word matrix.
Returns: score : float
The approximate (variational) bound used as the score.
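A sketch of one common use: comparing the bound across n_topics settings on the same data, where a higher (less negative) score is better (toy data, illustrative names):

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.RandomState(0).randint(0, 5, size=(20, 50))
for k in (2, 5, 10):
    lda = LatentDirichletAllocation(n_topics=k, random_state=0).fit(X)
    print(k, lda.score(X))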